A smoother HCP Terraform workspace experience
Learn how to automate HCP Terraform workspace setup and onboarding with the TFE provider, a custom module, and good requirements gathering.
HCP Terraform allows organizations to scale the management of their cloud infrastructure. However, onboarding multiple teams with unique requirements and workflows introduces its own set of challenges. This blog post will demonstrate how you can automate your HCP Terraform workspace setup by using the TFE provider and building an onboarding module.
» A common scenario
To illustrate a common scenario, imagine a tech company. We’ll call them “HashiCups”. Their platform team has successfully built their initial cloud landing zones using HCP Terraform.
Cloud landing zone: A pre-configured, secure, and scalable environment that serves as a foundation for deploying and managing cloud resources.
Now they’re ready to make their first attempt at onboarding an application team to HCP Terraform, with many more teams to follow. They realize that manually creating and configuring workspaces for each team is time-consuming and prone to errors. They need an automated onboarding process that's not only efficient but also scalable and consistent.
They’ve decided they’re going to add another abstraction layer to codify and automate the onboarding setup for HCP Terraform workspaces, teams and processes. They’ll do this using Terraform as the engine once again, with the TFE provider.
With this provider they can build a reusable Terraform module (we’ll call it the “workspace onboarding module”) that encapsulates best practices for workspace creation, permission management, and team onboarding. This approach should allow HashiCups to scale effortlessly as they bring more teams into their infrastructure as code ecosystem.
» Onboarding the first team
The HashiCups platform team will start their onboarding process by having a meeting with the application team. To prepare for this meeting, they’ll review their objectives.
The platform team has two main objectives here:
- Get the application team up and running as quickly as possible.
- Create and test their reusable onboarding pattern (which is codified in a Terraform module) so that they can iron out any issues before they offer it to other teams.
Based on these objectives, in their first meeting, they will ask:
- Whether the team is familiar with workspaces in HCP Terraform, providing an overview if necessary.
- What their environment landscape looks like (the promotion path, i.e. dev > test > prod).
- Who should be permitted to change infrastructure configuration, and if those permissions depend on the environment.
» What is an HCP Terraform workspace?
In HCP Terraform, a workspace is a fundamental concept used to organize infrastructure as code, so it makes sense to start the meeting by reviewing what workspaces are and how they affect the team’s infrastructure code.
An HCP Terraform workspace is an isolated environment where a specific team or working group can manage a specific set of infrastructure resources. Each workspace maintains its own state file, which is important for tracking the current state of your infrastructure and ensuring that Terraform can accurately plan and apply changes to it. It provides a collaborative space for teams to manage infrastructure as code, with capabilities such as version control integration, secure state management, and role-based access control.
» Workspace scoping recommendations
Our recommended practice is that you structure your HCP Terraform setup so that each workspace corresponds to a specific:
- Business unit
- Application name
- Infrastructure layer
- Promotion path environment (i.e. dev > test > prod)
- and/or region
Some example workspace names for a simple application following this recommendation could include:
- bu1-billing-prod-us-east
- bu1-billing-staging-us-east
For more complex scenarios, teams will need to divide their workspaces into even smaller scopes; a single workspace with a large number of resources becomes harder to manage and decipher. For example:
- bu2-orders-networking-prod-us-east
- bu2-orders-compute-prod-us-east
- bu2-orders-db-prod-us-east
- bu2-orders-networking-staging-us-east
- bu2-orders-compute-staging-us-east
- bu2-orders-db-staging-us-east
The main takeaway here is that you should delineate your workspace scopes according to how you need to isolate each environment, ensuring three things:
- Adequately limiting the potential impact or 'blast radius' of any change-related failures
- Preventing performance degradations from affecting other workspaces
- Accommodating different infrastructure sizing and configuration needs for development, testing, and production scenarios
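As a quick illustration of how such a scoping convention can be codified, here is a minimal sketch; the variable names (`business_unit`, `application`, `environment`, `region`) are hypothetical and separate from the onboarding module built later in this post:

```hcl
# Hypothetical sketch: compose a workspace name from its scope components.
variable "business_unit" { type = string }
variable "application" { type = string }
variable "environment" { type = string }
variable "region" { type = string }

locals {
  # Produces names such as "bu1-billing-prod-us-east"
  workspace_name = lower(join("-", [
    var.business_unit,
    var.application,
    var.environment,
    var.region,
  ]))
}
```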
» The requirements
After asking the questions listed earlier and building a general understanding of workspaces and how they can be scoped, the HashiCups platform team has gathered a set of requirements from the application team.
The application team explained that they use a 3-environment landscape (development, staging and production), which will translate into three workspaces. Through meetings with other stakeholders, such as security, operations leadership, and platform team leadership (sometimes these best-practice-building groups are called a “center of cloud excellence (CCoE)”), the platform team has an additional set of requirements for HCP Terraform workspace default settings:
- Each application team should have a group that is responsible for workspace administration and another group that has the necessary permissions to use the workspaces.
- Powerful data removal commands like `terraform destroy` should not be allowed in production, only in development and staging environments.
- Technical leadership has decided on workspace naming conventions. Each name will have only two pieces of information: an application identifier followed by an environment identifier (`<application>-<environment>`), and the workspace name must be in lowercase.
- Generally, the environment used by the end users must use the `prod` environment identifier.
After completing the discovery process, the platform team can now create the first version of the workspace onboarding module.
» Making the onboarding pattern reusable
The workspace onboarding module will generate the workspaces needed for the first application team. Rather than hardcoding that team’s specific requirements into the module, the onboarding module will expose input variables so that any team in the organization can use the same module to customize workspaces for their own needs. For example, while the first team has three environments, some teams have two, and others have more than three. The number of environments generated will therefore need to be an input variable in the module.
» Create the variable definitions
The first file we’ll create is the `variables.tf` file, where we’ll define four variables:
- `application_id`, to hold the application’s unique identifier.
- `admin_team_name`, to hold the name of the (pre-existing) HCP Terraform team representing the application administrators.
- `user_team_name`, to hold the name of the (pre-existing) HCP Terraform team representing the application infrastructure engineers (or developers).
- `environment_names`, to hold the list of environment names (dev, prod, etc.) in this application’s environment landscape.
The `environment_names` variable also needs a validation block to ensure that there is an environment named `prod`, as per the organization’s requirements.
variable "environment_names" {
description = "A list of environment names"
type = list(string)
validation {
condition = contains([for env in var.environment_names : lower(env)], "prod")
error_message = "The list of environment names must contain 'prod'."
}
}
variable "admin_team_name" {
description = "The name of the team for the workspace administrators"
type = string
}
variable "user_team_name" {
description = "The name of the team for the workspace users"
type = string
}
variable "application_id" {
description = "The identifier of the application"
type = string
}
» Create the workspaces
The next step is creating the `main.tf` file, where admins will define the workspaces and team permissions. When creating the workspace for the `prod` environment, the team configures it so that destroy plans aren’t allowed, as per the organization’s requirements. They’ll also use string interpolation to name the workspace according to the organization’s naming convention. See how this looks in the configuration below.
resource "tfe_workspace" "workspace" {
for_each = toset(var.environment_names)
name = "${lower(var.application_id)}-${lower(each.value)}"
description = "Workspace for the ${each.value} environment of application ${var.application_id}"
allow_destroy_plan = each.value == "prod" ? false : true
}
data "tfe_team" "admin_team" {
name = var.admin_team_name
}
data "tfe_team" "user_team" {
name = var.user_team_name
}
resource "tfe_team_access" "admin_team_access" {
for_each = toset(var.environment_names)
workspace_id = tfe_workspace.workspace[each.value].id
team_id = data.tfe_team.admin_team.id
access = "admin"
}
resource "tfe_team_access" "user_team_access" {
for_each = toset(var.environment_names)
workspace_id = tfe_workspace.workspace[each.value].id
team_id = data.tfe_team.user_team.id
access = "write"
}
Note that this example is using data sources to fetch information about the `admin_team` and the `user_team`. An alternative would be to accept the team ID instead of the team name as an input variable. Using the team ID as an input variable can simplify the code and make it more efficient in terms of data processing. However, it may also make it less intuitive for a human to understand the input at a glance.
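As a sketch of that alternative (the `admin_team_id` variable is illustrative and not part of the module built in this post), the data source lookup would be replaced by a direct input:

```hcl
# Hypothetical alternative: accept the team ID directly, skipping the name lookup.
variable "admin_team_id" {
  description = "The ID of the (pre-existing) admin team"
  type        = string
}

resource "tfe_team_access" "admin_team_access" {
  for_each = toset(var.environment_names)

  workspace_id = tfe_workspace.workspace[each.value].id
  team_id      = var.admin_team_id # no data.tfe_team lookup needed
  access       = "admin"
}
```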
» Make outputs available
One of the key principles in infrastructure as code is composition. Composition in the context of IaC and Terraform refers to the practice of building complex configurations by combining smaller, reusable components. This approach enables modular, scalable, and maintainable infrastructure definitions.
To enable composition with modules, the team needs to share information using outputs. In this case, they made the IDs of the workspaces created for the application team available, as well as the IDs of the admin and user teams, in the `outputs.tf` file:
output "workspace_ids" {
description = "The IDs of the created workspaces"
value = { for k, v in tfe_workspace.workspace : k => v.id }
}
output "admin_team_ids" {
description = "The IDs of the admin teams"
value = data.tfe_team.admin_team.id
}
output "user_team_ids" {
description = "The IDs of the user teams"
value = data.tfe_team.user_team.id
}
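To see composition in action, here is a hedged sketch of a consuming configuration: the module path, team names, and the `common-settings` variable set are hypothetical, and it assumes the TFE provider’s `tfe_variable_set` data source and `tfe_workspace_variable_set` resource:

```hcl
# Sketch: consume the onboarding module and compose it with other resources.
module "billing_workspaces" {
  source = "./modules/workspace-onboarding" # hypothetical local path

  application_id    = "billing"
  environment_names = ["dev", "staging", "prod"]
  admin_team_name   = "billing-admins"
  user_team_name    = "billing-developers"
}

# Look up a pre-existing variable set (hypothetical name).
data "tfe_variable_set" "common" {
  name = "common-settings"
}

# Attach the variable set to every workspace the module created,
# using the module's workspace_ids output.
resource "tfe_workspace_variable_set" "common" {
  for_each = module.billing_workspaces.workspace_ids

  workspace_id    = each.value
  variable_set_id = data.tfe_variable_set.common.id
}
```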
For a more in-depth discussion about outputs in Terraform, have a look at this discussion from HashiConf 2024: Meet the experts: Terraform module design.
» Module tests
At this point, the team has a working module, but it’s still missing an important component: Terraform tests. These tests are necessary to ensure that as engineers improve the module they do not introduce bugs or break existing functionality.
Terraform tests live under the `tests` directory in the module code repository.
» Test setup
The first step when writing a test suite is to ensure that the prerequisites are available. In this case, the prerequisites are the HCP Terraform teams for the workspace administrators and the workspace users.
To define the prerequisites, the platform team will create the file `tests/testing/setup/main.tf` with the following content:
resource "tfe_team" "admin_team" {
name = "admins-test"
}
resource "tfe_team" "user_team" {
name = "users-test"
}
» Test suite
The next step is to write the test suite. The platform team will create tests that ensure that the validation code on the `environment_names` variable works as expected.
To define the test suite, they’ll create the file `tests/environment_landscape_validation.tftest.hcl` with the following content:
provider "tfe" {
organization = "<replace with your HCP Terraform organization name>"
}
variables {
admin_team_name = "admins-test"
user_team_name = "users-test"
application_id = "my-app"
}
run "setup" {
module {
source = "./tests/testing/setup"
}
}
run "invalid_environment_landscape_missing_prod_name" {
command = plan
variables {
environment_names = ["dev", "staging"]
admin_team_name = var.admin_team_name
user_team_name = var.user_team_name
application_id = var.application_id
}
expect_failures = [var.environment_names]
}
run "invalid_environment_landscape_incorrect_prod_name" {
command = plan
variables {
environment_names = ["dev", "staging", "production"]
admin_team_name = var.admin_team_name
user_team_name = var.user_team_name
application_id = var.application_id
}
expect_failures = [var.environment_names]
}
run "valid_environment_landscape" {
command = plan
variables {
environment_names = ["dev", "staging", "prod"]
admin_team_name = var.admin_team_name
user_team_name = var.user_team_name
application_id = var.application_id
}
}
run "workspace_name_in_lowercase" {
command = plan
variables {
environment_names = ["Dev", "Staging", "Prod"]
admin_team_name = var.admin_team_name
user_team_name = var.user_team_name
application_id = "My-App"
}
assert {
condition = alltrue([for ws in tfe_workspace.workspace : lower(ws.name) == ws.name])
error_message = "All workspace names must be in lowercase."
}
}
The test suite above does the following:
- Provides a valid TFE provider configuration to use.
- Ensures that the test prerequisites are present.
- Validates that passing an invalid environment landscape is detected and fails the `plan` operation. An invalid environment landscape is either missing the `prod` environment or using an incorrect name such as `production`.
- Validates that passing a valid environment landscape is successful.
- Validates that passing an application ID and/or environment names with capitalized letters still results in workspace names that are all lowercase.
- Tears down the test prerequisites.
» Running the test suite
Executing the test suite requires access to HCP Terraform. Prior to running the test suite, the platform team will need to generate an API token with permissions to create teams and make it available to their execution environment. Here is an example of how to do this on Linux:
```shell
export TFE_TOKEN=<replace with the API token>
```
Executing the test suite is easy from the command line:
```shell
terraform test
```
Terraform will discover the available tests and execute them, reporting on the results:
```
tests/environment_landscape_validation.tftest.hcl... in progress
run "setup"... pass
run "invalid_environment_landscape_missing_prod_name"... pass
run "invalid_environment_landscape_incorrect_prod_name"... pass
run "valid_environment_landscape"... pass
run "workspace_name_in_lowercase"... pass
tests/environment_landscape_validation.tftest.hcl... tearing down
tests/environment_landscape_validation.tftest.hcl... pass

Success! 5 passed, 0 failed.
```
» Provide documentation and examples
When developing a Terraform module, it is highly recommended to include two essential files: a comprehensive documentation file and a detailed changelog.
The documentation file, named `README.md`, serves as a valuable resource for users of the module, providing clear instructions on its purpose, usage, input variables, outputs, and any specific requirements or dependencies. This documentation ensures that other team members or future maintainers can quickly understand and effectively utilize the module without extensive reverse engineering.
Equally important is the changelog file, named `CHANGELOG.md`, which records all notable changes and version increments for the module over time. The changelog acts as a historical record, allowing users to track modifications, understand the evolution of the module, and make informed decisions about upgrades. Together, these files significantly enhance the module’s usability, maintainability, and overall quality, fostering better collaboration and reducing potential issues arising from lack of information or miscommunication.
Example `README.md` file:
# Terraform Workspaces Module
## Description
This module configures one or more workspaces for an application team, with one workspace corresponding to one SDLC environment. It uses the TFE provider to create workspaces in HCP Terraform.
## Usage
```hcl
module "workspaces" {
source = "path/to/your/module"
environment_names = ["dev", "staging", "prod"]
admin_team_name = "admin-team"
user_team_name = "user-team"
application_id = "my-app"
}
```
## Inputs
| Name | Description | Type | Default | Required |
|-------------------|--------------------------------------------------|-------------|-------------|----------|
| environment_names | A list of environment names | list(string)| n/a | yes |
| admin_team_name | The name of the team for the workspace administrators | string | n/a | yes |
| user_team_name | The name of the team for the workspace users | string | n/a | yes |
| application_id | The identifier of the application | string | n/a | yes |
## Outputs
| Name | Description |
|---------------------|------------------------------------------|
| workspace_ids | The IDs of the created workspaces |
| admin_team_id       | The ID of the admin team                  |
| user_team_id        | The ID of the user team                   |
Example `CHANGELOG.md` file:
# Changelog
## [1.0.0] - 2025-01-XX
### Added
- Initial version of the Terraform module.
- Added support for creating workspaces in HCP Terraform using the TFE provider.
- Added variables for environment names, admin team name, user team name, and application ID.
- Added validation to ensure the environment names list contains "prod".
- Added outputs for workspace IDs, the admin team ID, and the user team ID.
- Added tests to validate the environment names and workspace name formatting.
» Recap and possible enhancements
Using an example scenario and company, this blog post has shown how to gather requirements to automate the creation of workspaces for an application team and translate those requirements into a reusable Terraform module, complete with code, documentation, and automated tests. Once created, the module should be published and used by future teams adopting HCP Terraform. You can learn more about publishing your module in this tutorial: Share modules in the private registry.
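Once the module is in the private registry, consuming teams can reference it with a pinned version. In this sketch, the organization (`hashicups`) and module name are placeholders; the source follows the registry’s `<HOST>/<ORGANIZATION>/<NAME>/<PROVIDER>` address format:

```hcl
# Hypothetical private registry reference with a pinned version range.
module "workspaces" {
  source  = "app.terraform.io/hashicups/workspace-onboarding/tfe"
  version = "~> 1.0"

  application_id    = "my-app"
  environment_names = ["dev", "staging", "prod"]
  admin_team_name   = "admin-team"
  user_team_name    = "user-team"
}
```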
We intentionally used a simplified scenario to keep the post to a reasonable length and easy to understand, but some readers may argue that certain important aspects were omitted, and they’d be right. Let’s cover those aspects and flag them as potential enhancements to the example solution:
- Introduce HCP Terraform projects
- Configure workspace notifications
- Use dynamic provider credentials
- Module lifecycle management
HCP Terraform projects are used to group together multiple related workspaces, simplifying configuration by allowing RBAC, variable sets, and policy sets to be configured at the project level rather than at the workspace level, easing the management burden.
It generally makes sense to create an HCP Terraform project per application team, and in our scenario, we could update the module to create a project and register all workspaces under it.
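A minimal sketch of that enhancement could look like the following, assuming the TFE provider’s `tfe_project` resource and the `project_id` argument on `tfe_workspace`:

```hcl
# Sketch: group all of the application's workspaces under one project.
resource "tfe_project" "application" {
  name = lower(var.application_id)
}

resource "tfe_workspace" "workspace" {
  for_each = toset(var.environment_names)

  name       = "${lower(var.application_id)}-${lower(each.value)}"
  project_id = tfe_project.application.id
  # ...remaining arguments as shown earlier in this post...
}
```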
You can learn more about using HCP Terraform projects in this tutorial: Organize workspaces with projects.
Workspace notifications enable HCP Terraform to send notifications about run progress and other significant events to external systems, such as Slack, Microsoft Teams, or via email. Each workspace can have its own notification settings, allowing up to 20 different notification destinations per workspace.
In our scenario, the platform team could update the module to configure workspace notifications that trigger on the following events (see the sketch after this list):
- Workspace events
  - Drift (HCP Terraform detected configuration drift)
  - Check failure (HCP Terraform detected one or more failed continuous validation checks), if your code includes checks
- Run events
  - Needs attention (a plan has changes and HCP Terraform requires user input to continue)
  - Errored (a run terminated early due to an error or cancellation)
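As a hedged sketch of that enhancement, the module could create one notification configuration per workspace using the TFE provider’s `tfe_notification_configuration` resource; the Slack webhook URL is a placeholder, and the trigger names should be checked against the provider documentation:

```hcl
# Sketch: one Slack notification configuration per created workspace.
resource "tfe_notification_configuration" "slack" {
  for_each = tfe_workspace.workspace

  name             = "${each.value.name}-slack"
  enabled          = true
  destination_type = "slack"
  url              = "https://hooks.slack.com/services/REPLACE/ME" # placeholder
  workspace_id     = each.value.id

  triggers = [
    "run:needs_attention",      # plan has changes and needs user input
    "run:errored",              # run terminated early
    "assessment:drifted",       # configuration drift detected
    "assessment:check_failure", # failed continuous validation checks
  ]
}
```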
Dynamic provider credentials in HCP Terraform represent a significant advancement in security and access management. This feature allows Terraform to automatically generate short-lived credentials for cloud providers on-demand, rather than relying on long-term, static access keys. When a Terraform operation is initiated, HCP Terraform requests temporary credentials from the cloud provider that are valid only for the duration of the specific task.
This approach substantially improves the security posture by minimizing the exposure window of access credentials, reducing the risk of unauthorized access if credentials are compromised. Additionally, it eliminates the need to manage and manually rotate long-term access keys, decreasing administrative overhead and the potential for human error in credential management.
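As an illustration for AWS, assuming the OIDC trust and IAM role have already been set up (the role ARN below is a placeholder), the module could enable dynamic credentials by setting the documented `TFC_AWS_*` environment variables on each workspace:

```hcl
# Sketch: enable AWS dynamic provider credentials on every created workspace.
resource "tfe_variable" "aws_provider_auth" {
  for_each = tfe_workspace.workspace

  workspace_id = each.value.id
  category     = "env"
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
}

resource "tfe_variable" "aws_run_role_arn" {
  for_each = tfe_workspace.workspace

  workspace_id = each.value.id
  category     = "env"
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = "arn:aws:iam::123456789012:role/hcp-terraform-run-role" # placeholder
}
```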
You can learn more about using dynamic provider credentials in this tutorial: Authenticate providers with dynamic credentials.
Module lifecycle management is a systematic approach to control and maintain Terraform modules from rollout to retirement, provide module status visibility, and improve communication across teams. Key aspects of module lifecycle management include:
- Version control: Modules are tracked in version control systems to maintain history and allow collaboration among module maintainers.
- Testing: Writing and executing tests for your module code lets you check if everything works correctly before publishing a new version.
- Publishing new versions: Releasing an updated copy of your module in a private registry with new features or fixes, similar to releasing a new version of an application.
- Deprecation handling: The ability to mark modules as deprecated and communicate when modules should no longer be used.
The module in this post is designed to onboard a new application team, but with minor changes it could also apply to shared services maintained by a platform team: the `application_id` variable could be renamed `shared_service_id`, and this would be the start of a good foundation for shared services. As a platform team engineer, you can create a module that not only speeds up the onboarding of new teams but also solves onboarding for shared services that your team manages.
» Additional resources
- Automate HCP Terraform workflows
- Try HCP Terraform’s free tier which includes up to 500 resources per month.