HCP Terraform Operator is now certified on Red Hat OpenShift
The HCP Terraform Operator for Kubernetes can now be found in the Red Hat Ecosystem Catalog. Several new features have just been added.
We are excited to announce that the HCP Terraform Operator for Kubernetes (formerly known as the Terraform Cloud Operator) is now certified for Red Hat OpenShift. This certification marks a significant milestone in our commitment to providing reliable, scalable, and secure integrations between Terraform and Kubernetes, streamlining workflows across hybrid and multi-cloud environments.
» What is the HCP Terraform Operator?
The HCP Terraform Operator allows users to manage HCP Terraform and Terraform Enterprise workspaces, agent pools, and resources directly from their Kubernetes clusters. It brings Terraform’s state handling, sequential run execution, and established patterns for secret injection and resource provisioning into Kubernetes-native workflows.
The HCP Terraform Operator reached general availability in November 2023. Since then we’ve shipped several improvements, and today we’re focusing on three major updates:
» Flexible deletion policy
With enhanced control over workspace deletion, users can now manage resources more safely, especially during migrations or when moving workloads to different clusters. This update ensures that resources are preserved by default, reducing the risk of accidental deletions and supporting seamless infrastructure transitions.
Here’s how this looks in the configuration file:
```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: this
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: this
  deletionPolicy: retain
```
The `deletionPolicy` field offers the following options:
- `retain` (default): keeps the workspace even after the custom resource is deleted.
- `soft`: deletes the workspace only if it has no managed resources.
- `destroy`: destroys all managed resources, then deletes the workspace and the custom resource.
- `force`: immediately deletes both the workspace and the custom resource without waiting.
» Improved agent scaling
The latest updates introduce more intelligent scaling capabilities, letting teams match agent capacity to workload demand. A configurable cooldown period governs the timing between scaling events: `scaleUpSeconds` sets the minimum wait between scale-up events, while `scaleDownSeconds` sets how long the operator waits after a run before scaling down. This prevents churn in dynamic environments and keeps resource use efficient. You can see how agent scaling works in this example configuration:
```yaml
apiVersion: app.terraform.io/v1alpha2
kind: AgentPool
metadata:
  name: this
spec:
  name: this
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  agentTokens:
    - name: alpha
  autoscaling:
    maxReplicas: 4
    minReplicas: 0
    cooldownPeriod:
      scaleUpSeconds: 30
      scaleDownSeconds: 1800
  agentDeployment:
    replicas: 0
```
» Red Hat OpenShift support
Red Hat OpenShift is a widely used enterprise Kubernetes platform, and the HCP Terraform Operator is now certified for it, with a listing in the Red Hat Ecosystem Catalog. To begin using the HCP Terraform Operator on OpenShift, follow these steps:
- Access the OpenShift Console: Start by logging into the OpenShift Container Platform administrator console.
- Navigate to OperatorHub: Within the console, go to the OperatorHub, where you can find a wide range of certified operators.
- Search for HCP Terraform Operator: In the search bar, type "HCP Terraform Operator" to locate the operator.
- Install the Operator: Click on the HCP Terraform Operator and follow the prompts to complete the installation.
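If you prefer installing from the command line, the certified operator can also be subscribed to through Operator Lifecycle Manager (OLM). A minimal sketch, assuming the package is published as `hcp-terraform-operator` in the certified operators catalog; verify the package name and channel against the catalog entry in OperatorHub before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  # Hypothetical name; match the catalog entry
  name: hcp-terraform-operator
  namespace: openshift-operators
spec:
  # Assumption: confirm the channel shown in OperatorHub
  channel: stable
  # Assumption: package name as listed in the certified catalog
  name: hcp-terraform-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
```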

» Start using the HCP Terraform Operator
Refer to our official documentation for more details and best practices on setting up and configuring the HCP Terraform Operator. You can test out the Kubernetes Operator with a few additional tutorials on more specific workflows:
- Deploy infrastructure with the HCP Terraform Operator
- Manage agent pools with the HCP Terraform Operator
We are committed to continually enhancing the HCP Terraform Operator to meet your evolving needs. We welcome your feedback and encourage you to share any bugs or feature requests on our GitHub issues page.
If you are new to Terraform, now is the perfect time to explore the benefits of infrastructure as code. Sign up for a free HCP account and unlock the potential of streamlined, scalable infrastructure management today.