Terraform Stacks, explained
Terraform Stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.
This post was updated in October 2024.
Terraform Stacks are a feature designed to simplify infrastructure provisioning and management at scale, providing a built-in way to scale without added complexity. In this post, we'd like to share more detail about our vision, where we are today, and where we're going next.
» What challenges do Terraform Stacks solve?
There are a number of benefits to using small modules and workspaces to build a composable infrastructure. Splitting up your Terraform code into manageable pieces helps:
- Limit the blast radius of resource changes
- Reduce run time
- Separate management responsibilities across team boundaries
- Work around multi-step use cases such as provisioning a Kubernetes cluster
Terraform’s ability to take code, build a graph of dependencies, and turn it into infrastructure is extremely powerful. However, once you split your infrastructure across multiple Terraform configurations, the isolation between states means you must stitch together and manage dependencies yourself.
Additionally, when deploying and managing infrastructure at scale, teams usually need to provision the same infrastructure multiple times with different input values, across multiple:
- Cloud provider accounts
- Environments (dev, staging, production)
- Regions
- Landing zones
Before Terraform Stacks, there was no built-in way to provision and manage the lifecycle of these instances as a single unit in Terraform, leaving teams to manage each infrastructure root module individually.
We knew these challenges could be solved in a better, more valuable way than wrapping Terraform with bespoke scripting and external tooling, an approach that requires heavy lifting and is error-prone and risky to set up and maintain.
» What are Terraform Stacks and what are their benefits?
Stacks help users automate and optimize the coordination, deployment, and lifecycle management of interdependent Terraform configurations, reducing the time and overhead of managing infrastructure. Key benefits include:
- Simplified management: Stacks eliminate the need to manually track and manage cross-configuration dependencies. Multiple Terraform modules sharing the same lifecycle can be organized and deployed together using components in a Stack.
- Improved productivity: Stacks empower users to rapidly create and modify consistent infrastructure setups with differing inputs, all with one simple action. Users can leverage deployments in a Stack to effortlessly repeat their infrastructure and can set up orchestration rules to automate the rollout of changes across these repeated infrastructure instances.
Stacks aim to be a natural next step in extending infrastructure as code to a higher layer using the same Terraform shared modules users enjoy today.
» Common use cases for Terraform Stacks
Here are some common use cases that Stacks address out of the box:
- Deploy an entire application with components like networking, storage, and compute as a single unit without worrying about dependencies. A Stack configuration describes a full unit of infrastructure as code and can be handed to users who don’t have advanced Terraform experience, allowing them to easily stand up a complex infrastructure deployment with a single action.
- Deploy across multiple regions, availability zones, and cloud provider accounts without duplicating effort/code. Deployments in a Stack let you define multiple instances of the same configuration without needing to copy and paste configurations, or manage configurations separately. When a change is made to the Stack configuration, it can be rolled out across all, some, or none of the deployments in a Stack.
- Provision and manage Kubernetes workloads. Stacks streamline the provisioning and management of Kubernetes workloads by allowing customers to deploy Kubernetes in a single configuration instead of managing multiple, independent Terraform configurations. Kubernetes deployments often run into the problem of having too many unknown values to complete a plan. With Stacks, customers can bring Kubernetes deployments to market faster at scale, without the layered, multi-configuration approach that is hard to pull off within Terraform.
» How do I use a Terraform Stack?
Stacks introduce a new configuration layer that sits on top of Terraform modules and is written as code.
» Components
The first part of this configuration layer, declared in files with a .tfstack.hcl extension, tells Terraform what infrastructure, or components, should be part of the Stack. You can compose and deploy multiple modules that share a lifecycle using components in a Stack. Add a component block to the components.tfstack.hcl configuration for every module you'd like to include in the Stack, specifying the source module, inputs, and providers for each component.
components.tfstack.hcl
component "cluster" {
source = "./eks"
inputs = {
aws_region = var.aws_region
cluster_name_prefix = var.prefix
instance_type = "t2.medium"
}
providers = {
aws = provider.aws.this
random = provider.random.this
tls = provider.tls.this
cloudinit = provider.cloudinit.this
}
}
You don’t need to rewrite any modules since components can simply leverage your existing ones.
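The provider references in the component block above, such as provider.aws.this, point to provider configurations declared elsewhere in the Stack configuration. As a minimal, illustrative sketch (the file name, source address, and version constraint below are assumptions, not part of the example above):

providers.tfstack.hcl
# Declare which providers the Stack uses
required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.0"
  }
}

# Provider configurations are named, so a component can select the exact
# configuration it needs (for example, provider.aws.this)
provider "aws" "this" {
  config {
    region = var.aws_region
  }
}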
» Deployments
The second part of this configuration layer, which uses a .tfdeploy.hcl file extension, tells Terraform where and how many times to deploy the infrastructure in the Stack. For each instance of the infrastructure, you add a deployment block with the appropriate input values and Terraform will take care of repeating that infrastructure for you.
deployments.tfdeploy.hcl
deployment "west-coast" {
inputs = {
aws_region = "us-west-1"
instance_count = 2
}
}
deployment "east-coast" {
inputs = {
aws_region = "us-east-1"
instance_count = 1
}
}
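The input values supplied by each deployment flow into variables declared in the Stack configuration, which components then read as var.<name>. Here is a minimal sketch matching the inputs above (the file name and types are assumptions for illustration):

variables.tfstack.hcl
# Each deployment supplies a value for these Stack-level variables
variable "aws_region" {
  type = string
}

variable "instance_count" {
  type = number
}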
When a new version of the Stack configuration is available, plans are initiated for each deployment in the Stack. Once the plan is complete, you can approve the change in all, some, or none of the deployments in the Stack.
» Orchestration rules
Defined in HCL, orchestration rules allow customers to automate repetitive actions in Stacks. At the launch of the public beta, users can auto-approve a plan when certain orchestration checks and criteria are met. For example, the following orchestrate block automatically approves deployments if there are no resources being removed in the plan.
orchestrate "auto_approve" “safe_plans” {
check {
#check that there are no resources being removed
condition = context.plan.changes.remove == 0
reason = "Plan has ${context,plan.changes. remove} resources to be removed."
}
}
HCP Terraform evaluates the check blocks within your orchestrate block to determine if it should approve a plan. If all of the checks pass, then HCP Terraform approves the plan for you. If one or more conditions do not pass, then HCP Terraform shows the reason why, and you must manually approve that plan. This simplifies the management of large numbers of deployments by codifying orchestration checks that are aware of plan context in the Terraform workflow.
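An orchestrate block can also contain more than one check, and every check must pass before HCP Terraform auto-approves the plan. The following sketch is illustrative and assumes the plan-changes summary exposes a count of modified resources (change) alongside remove:

orchestrate "auto_approve" "additions_only" {
  check {
    # No resources may be removed
    condition = context.plan.changes.remove == 0
    reason    = "Plan removes ${context.plan.changes.remove} resources."
  }

  check {
    # No existing resources may be modified
    condition = context.plan.changes.change == 0
    reason    = "Plan modifies ${context.plan.changes.change} resources."
  }
}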
» Deferred changes
Deferred changes is a Stacks feature that allows Terraform to produce a partial plan when it encounters too many unknown values, rather than halting the operation. This helps users work through these situations more easily, accelerating the deployment of specific workloads with Terraform. Deferred changes are what make the Kubernetes use case mentioned earlier possible.
Consider an example of deploying three Kubernetes clusters, each with one or more namespaces, into three different geographies. In a Stack, you would use one component to reference a module for deploying the Kubernetes cluster and another component for a module that creates a namespace in it. In order to repeat this Kubernetes cluster across three geographies, you would simply define a deployment for each geography and pass in the appropriate inputs for each, such as region identifiers.
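A rough sketch of that layout follows. The module paths, output names (such as component.cluster.endpoint), variable names, and regions are illustrative assumptions; the point is that the namespace component consumes values from the cluster component, and each geography is simply another deployment block:

components.tfstack.hcl
component "cluster" {
  source = "./modules/eks-cluster"

  inputs = {
    region = var.region
  }

  providers = {
    aws = provider.aws.this
  }
}

component "namespaces" {
  source = "./modules/k8s-namespace"

  inputs = {
    # These values may be unknown until the cluster exists; deferred
    # changes let Terraform produce a partial plan instead of failing
    cluster_endpoint = component.cluster.endpoint
    namespaces       = var.namespaces
  }

  providers = {
    kubernetes = provider.kubernetes.this
  }
}

deployments.tfdeploy.hcl
deployment "us" {
  inputs = {
    region     = "us-east-1"
    namespaces = ["apps"]
  }
}

deployment "europe" {
  inputs = {
    region     = "eu-west-1"
    namespaces = ["apps"]
  }
}

deployment "asia" {
  inputs = {
    region     = "ap-southeast-1"
    namespaces = ["apps"]
  }
}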
If you decided to add a new namespace to each of your Kubernetes clusters, it would result in plans queued across all three geographies. To test this change before propagating it to multiple geographies, you could add the namespace to the US geo first. After validating everything worked as expected, you could approve the change in the Europe geo next. You have the option to save the plan in the Asia geo for later. Having changes that are not applied in one or more deployments does not prevent new changes that are made to the Stack from being planned.
See how Kubernetes clusters are deployed in Terraform Stacks by watching this video:
» What’s next for Terraform Stacks?
At HashiConf 2024, we announced the HCP Terraform public beta of Stacks. During the public beta, users can experiment with Stacks to provision and manage up to 500 resources for free, including the new Kubernetes use case and the two features mentioned earlier: deferred changes and orchestration rules. Once users reach that limit, they enter a degraded mode in which Stack applies are blocked: Stack plans can still proceed, but only plans that destroy resources can be applied until the resources-under-management (RUM) count drops back under 500. Go to HashiCorp Developer to learn how to create a Stack in HCP Terraform.
While our public beta is limited to RUM-based HCP Terraform plans, certain Stacks functionality will be incorporated into upcoming releases of the community edition of Terraform. Workspaces will continue to have their use cases, and Terraform will continue to work with both workspaces and Stacks.
We hope you’re as excited about Stacks as we are, and appreciate your support as we transform how organizations use Terraform to further simplify infrastructure provisioning and management at scale.