Using the Kubernetes and Helm Providers with Terraform 0.12
Tip: HashiCorp Learn has a consistently updated tutorial on Managing Kubernetes Resources via Terraform. Visit this page for the most up-to-date steps and code samples for using the Terraform Kubernetes provider.
With Terraform 0.12 generally available, new configuration language improvements allow additional templating of Kubernetes resources. In this post, we will demonstrate how to use Terraform 0.12, the Kubernetes provider, and the Helm provider to configure and deploy Kubernetes resources.
The following examples demonstrate the use of Terraform providers to deploy additional services and functions for supporting applications:
- ExternalDNS deployment, to set up DNS aliases for services or ingresses.
- Fluentd daemonset, for sending application logs.
- Consul Helm chart, for service mesh and application configuration.
Deployment of these services happens after creating the infrastructure and Kubernetes cluster with a Terraform cloud provider.
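For example, if the cluster itself is created in the same configuration, the Kubernetes provider can read its connection details straight from the cluster resource. The sketch below assumes an Amazon EKS cluster defined elsewhere as aws_eks_cluster.main; the resource name and the 1.x-era load_config_file argument are illustrative assumptions, not part of the original example.

# Hypothetical: an EKS cluster assumed to be defined elsewhere as aws_eks_cluster.main.
data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  # Authenticate against the cluster Terraform just created, rather than a kubeconfig file.
  load_config_file       = false
  host                   = aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}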
» ExternalDNS Deployment
A Kubernetes deployment maintains the desired number of application pods. In this example, we create a Kubernetes deployment with Terraform that will interpolate identifiers and attributes from resources created by the cloud provider. This alleviates the need for separate or additional automation to retrieve attributes such as hosted zone identifiers, domain names, and CIDR blocks.
We can use ExternalDNS to create a DNS record for a service upon creation or update. ExternalDNS runs in Kubernetes as a deployment. First, we translate the Kubernetes deployment configuration file for ExternalDNS to Terraform’s configuration language (called HCL). This allows Terraform to display the differences in each section as changes are applied. The code below shows the Terraform kubernetes_deployment resource to create ExternalDNS.
locals {
  name = "external-dns"
}

resource "aws_route53_zone" "dev" {
  name = "dev.${var.domain}"
}

resource "kubernetes_deployment" "external_dns" {
  metadata {
    name      = local.name
    namespace = var.namespace
  }

  spec {
    selector {
      match_labels = {
        app = local.name
      }
    }

    template {
      metadata {
        labels = {
          app = local.name
        }
      }

      spec {
        container {
          name  = local.name
          image = var.image
          args = concat([
            "--source=service",
            "--source=ingress",
            "--domain-filter=${aws_route53_zone.dev.name}",
            "--provider=${var.cloud_provider}",
            "--policy=upsert-only",
            "--registry=txt",
            "--txt-owner-id=${aws_route53_zone.dev.zone_id}"
          ], var.other_provider_options)
        }

        service_account_name = local.name
      }
    }

    strategy {
      type = "Recreate"
    }
  }
}
Note that we use Terraform 0.12 first-class expressions, such as var.namespace or local.name, without the need for variable interpolation syntax. Furthermore, we reference the hosted zone resource we created with aws_route53_zone. The dynamic reference to the AWS resource removes the need to separately extract and inject the attributes into a Kubernetes manifest.
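Once the deployment is running, ExternalDNS watches services and ingresses for hostname requests. As a hypothetical illustration (the example-app name, port numbers, and LoadBalancer type below are placeholders, not part of the original example), a service annotated with external-dns.alpha.kubernetes.io/hostname would get a record created in the zone defined above:

resource "kubernetes_service" "app" {
  metadata {
    name      = "example-app"
    namespace = var.namespace

    annotations = {
      # ExternalDNS reads this annotation and creates the matching Route 53 record.
      "external-dns.alpha.kubernetes.io/hostname" = "app.${aws_route53_zone.dev.name}"
    }
  }

  spec {
    selector = {
      app = "example-app"
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}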
» Kubernetes DaemonSets
To collect application logs, we can deploy Fluentd as a Kubernetes daemonset. Fluentd collects, structures, and forwards logs to a logging server for aggregation. Each Kubernetes node must have an instance of Fluentd. A Kubernetes daemonset ensures a pod is running on each node. In the following example, we configure the Fluentd daemonset to use Elasticsearch as the logging server.
Configuring Fluentd to target a logging server requires a number of environment variables, including ports, hostnames, and usernames. In versions of Terraform prior to 0.12, we duplicated blocks such as volume or env and added different parameters to each one. The excerpt below demonstrates the pre-0.12 Terraform configuration for the Fluentd daemonset.
resource "kubernetes_daemonset" "fluentd" {
  metadata {
    name = "fluentd"
  }

  spec {
    template {
      spec {
        container {
          name  = "fluentd"
          image = "fluent/fluentd-kubernetes-daemonset:elasticsearch"

          env {
            name  = "FLUENT_ELASTICSEARCH_HOST"
            value = "elasticsearch-logging"
          }

          env {
            name  = "FLUENT_ELASTICSEARCH_PORT"
            value = "9200"
          }

          env {
            name  = "FLUENT_ELASTICSEARCH_SCHEME"
            value = "http"
          }

          env {
            name  = "FLUENT_ELASTICSEARCH_USER"
            value = "elastic"
          }

          env {
            name  = "FLUENT_ELASTICSEARCH_PASSWORD"
            value = "changeme"
          }
        }
      }
    }
  }
}
Using Terraform 0.12 dynamic blocks, we can specify a map of environment variables and use a for_each loop to create each env child block in the daemonset.
locals {
  name = "fluentd"

  labels = {
    k8s-app = "fluentd-logging"
    version = "v1"
  }

  env_variables = {
    "HOST" : "elasticsearch-logging",
    "PORT" : var.port,
    "SCHEME" : "http",
    "USER" : var.user,
    "PASSWORD" : var.password
  }
}

resource "kubernetes_daemonset" "fluentd" {
  metadata {
    name      = local.name
    namespace = var.namespace
    labels    = local.labels
  }

  spec {
    selector {
      match_labels = {
        k8s-app = local.labels.k8s-app
      }
    }

    template {
      metadata {
        labels = local.labels
      }

      spec {
        volume {
          name = "varlog"

          host_path {
            path = "/var/log"
          }
        }

        volume {
          name = "varlibdockercontainers"

          host_path {
            path = "/var/lib/docker/containers"
          }
        }

        container {
          name  = local.name
          image = var.image

          dynamic "env" {
            for_each = local.env_variables

            content {
              name  = "FLUENT_ELASTICSEARCH_${env.key}"
              value = env.value
            }
          }

          resources {
            limits {
              memory = "200Mi"
            }

            requests {
              cpu    = "100m"
              memory = "200Mi"
            }
          }

          volume_mount {
            name       = "varlog"
            mount_path = "/var/log"
          }

          volume_mount {
            name       = "varlibdockercontainers"
            read_only  = true
            mount_path = "/var/lib/docker/containers"
          }
        }

        termination_grace_period_seconds = 30
        service_account_name             = local.name
      }
    }
  }
}
In this example, we specify a map with a key and value for each environment variable. The dynamic "env" block iterates over each entry in the map, retrieves the key and value, and creates an env child block. This minimizes duplication in the configuration and allows any number of environment variables to be added or removed.
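If additional settings need to flow in from elsewhere, one option is to merge a variable into the local map so callers can extend it without editing the daemonset resource. This is a sketch; the extra_env_variables and all_env_variables names are hypothetical.

variable "extra_env_variables" {
  description = "Additional FLUENT_ELASTICSEARCH_* settings to append to the defaults"
  type        = map(string)
  default     = {}
}

locals {
  # The dynamic "env" block would then iterate over this merged map,
  # i.e. for_each = local.all_env_variables
  all_env_variables = merge(local.env_variables, var.extra_env_variables)
}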
» Managing Helm Charts via Terraform
For services packaged with Helm, we can also use Terraform to deploy charts and run tests. Helm provides application definitions in the form of charts. Services or applications often have official charts for streamlining deployment. For example, we might want to use Consul, a service mesh that provides a key-value store, to connect applications and manage configuration in our Kubernetes cluster.
We can use the official Consul Helm chart, which packages the necessary Consul application definitions for deployment. When using Helm directly, we would first deploy a component called Tiller for version 2 of Helm. Then, we would store the Consul chart locally, deploy the chart with helm install, and test the deployment with helm test.
When using the Terraform Helm provider, the provider handles deployment of Tiller, installation of a Consul cluster via the chart, and triggering of acceptance tests. First, we include the install_tiller option with the Helm provider.
provider "helm" {
  version        = "~> 0.9"
  install_tiller = true
}
Next, we use the Terraform helm_release resource to deploy the chart. We pass the variables to the Helm chart with set blocks. We also include a provisioner to run a set of acceptance tests after deployment, using helm test. The acceptance tests confirm that Consul is ready for use.
resource "helm_release" "consul" {
  name      = var.name
  chart     = "${path.module}/consul-helm"
  namespace = var.namespace

  set {
    name  = "server.replicas"
    value = var.replicas
  }

  set {
    name  = "server.bootstrapExpect"
    value = var.replicas
  }

  set {
    name  = "server.connect"
    value = true
  }

  provisioner "local-exec" {
    command = "helm test ${var.name}"
  }
}
When we run terraform apply, Terraform deploys the Helm release and runs the tests. By using Terraform to deploy the Helm release, we can pass attributes from infrastructure resources to the curated application definition in Helm and run available acceptance tests in a single, common workflow.
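As a sketch of that attribute flow, suppose we also manage a storage class for the Consul servers with Terraform. Assuming the chart exposes a server.storageClass value (the consul-server name and gp2 parameters below are illustrative), an extra set block inside the helm_release above could reference the storage class resource directly:

resource "kubernetes_storage_class" "consul" {
  metadata {
    name = "consul-server"
  }

  storage_provisioner = "kubernetes.io/aws-ebs"

  parameters = {
    type = "gp2"
  }
}

# Added inside the helm_release "consul" resource shown earlier:
#
#   set {
#     name  = "server.storageClass"
#     value = kubernetes_storage_class.consul.metadata[0].name
#   }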
» Conclusion
We can use Terraform to not only manage and create Kubernetes clusters but also create resources on clusters with the Kubernetes API or Helm. We examined how to interpolate resource identifiers and attributes from infrastructure resources into Kubernetes services, such as ExternalDNS. Furthermore, we used improvements in Terraform 0.12 to minimize configuration and deploy a Fluentd daemonset. Finally, we deployed and tested Consul using the Terraform Helm provider.
Leveraging this combination of providers allows users to seamlessly pass attributes from infrastructure to Kubernetes clusters and minimize additional automation to retrieve them. For more information about Terraform 0.12 and its improvements, see our blog announcing Terraform 0.12. To learn more about providers, see the Kubernetes provider reference and the Helm provider reference.