Terraform MCP server updates: Stacks support, new tools, and tips

Terraform MCP server 0.4 adds new features for smarter, safer DevOps. This post also features tips for using all our MCP servers effectively.

General AI models are great, but they guess wrong when it comes to your infrastructure. They don’t know your private modules, your security rules, or your internal secrets.

You can bridge that gap with the Terraform MCP server. For those unfamiliar with HashiCorp’s Model Context Protocol (MCP) servers, they link generic LLMs to your actual ops environment, giving the AI the live data and documentation it needs to stop hallucinating and return better responses.

For Terraform Enterprise users subject to strict data sovereignty requirements or limited internet connectivity, the MCP server provides a path to AI automation in air-gapped environments. By pairing it with a locally hosted LLM, developers can take advantage of the MCP server without external internet access, ensuring compliance with security and sovereignty policies.

Today, we are excited to add even more features to our Terraform MCP server and share tips on how to use all of our MCP servers. With new support for Terraform Stacks and policy sets, your AI assistant becomes more context-aware, secure, and ready for real enterprise work.

»The Terraform MCP server: Smarter, safer automation

We’ve previously shared how the Terraform MCP server can benefit your workflow in our documentation and video deep-dive.

Today's release further improves the experience for HCP Terraform and Terraform Enterprise users. We’ve strengthened the connection to your infrastructure with a fresh set of tools and fixes designed to eliminate friction.

Instead of wasting time digging for internal security rules or writing repetitive boilerplate, the Terraform MCP server now has:

  • [New] Support for Stacks: Deploy and manage Terraform Stacks using natural language.
  • [New] Streamlined policy management: We’ve added a new tool called attach_policy_set_to_workspaces, allowing you to handle governance workflows directly via chat.
  • [New] Granular control: You can now choose exactly which capabilities to enable. Use the new --toolsets flag to toggle between public registry access, private registry access, or operations tools. You can even enable specific tools one by one with --tools for maximum security.
  • Security policy lookup: Intelligently look up Sentinel policies and CIS benchmarks to recommend security rules alongside your infrastructure code.
  • Access to private modules: Read private modules hosted in your internal registry to write code that matches your standards, not just public examples.
  • Natural language commands: Use natural language to create workspaces, update variables, and tag resources securely.
  • [Updated] Self-hosted improvements: We fixed an issue where the "Skip TLS" flag wasn't propagating correctly — a critical fix for air-gapped or self-hosted environments using custom certificates.
  • [Updated] Smarter error handling: Input validation errors now return as clear "Tool Execution Errors," helping the LLM understand why it failed so it can correct itself immediately.
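The `--toolsets` and `--tools` flags described above can be combined at launch time to scope exactly what your assistant can reach. A minimal sketch, assuming the server binary is `terraform-mcp-server` and noting that the specific toolset and tool names below are illustrative assumptions (check `terraform-mcp-server --help` for the real values):

```shell
# Illustrative: enable only two toolsets rather than everything.
# Toolset names here are assumptions; verify with --help.
terraform-mcp-server --toolsets private-registry,operations

# Maximum-security alternative: enable individual tools one by one.
# The tool name below is a placeholder for a real tool identifier.
terraform-mcp-server --tools attach_policy_set_to_workspaces
```

Scoping tools this way follows the principle of least privilege: the LLM can only invoke capabilities you have explicitly turned on.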

Here’s a look at Terraform MCP server usage in an air-gapped demo environment:

Air-gapped Terraform AI for regulated teams

»The full HashiCorp MCP ecosystem: Vault & Consul

We have MCP servers for more than just Terraform. We’re also actively developing three other MCP servers to give you AI assistance with enterprise context when working in HashiCorp Vault, HCP Vault Radar, and Consul.

The Vault MCP server, available on GitHub and Docker Hub, turns complex secret management into a conversation. If your AI spots a hard-coded secret, the MCP server can:

  • Create a secure mount in Vault
  • Write the secret securely
  • Refactor your code to fetch it dynamically
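The steps above roughly correspond to this Vault CLI workflow. The mount path (`app`) and secret names are illustrative assumptions, not paths the MCP server is guaranteed to choose:

```shell
# 1. Create a secure KV v2 mount (path "app" is an assumption).
vault secrets enable -path=app kv-v2

# 2. Write the previously hard-coded secret into Vault.
vault kv put app/payments-api db_password="example-value"

# 3. The refactored code then fetches the value at runtime
#    instead of embedding it, e.g. via the CLI or Vault API:
vault kv get -field=db_password app/payments-api
```

The difference after the refactor is that the secret lives only in Vault; source code holds a reference, not the value.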

The Vault Radar MCP server allows you to stop digging through dashboards and just ask questions like, "What are my top security risks?" It queries GitHub, AWS, and Azure to give you an instant, prioritized list of leaks.

The Consul MCP server handles service mesh questions like, "Do I have any over-privileged tokens?" or "Check for secure configuration." The server translates your questions into precise Consul queries and identifies the right API calls to run.
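As a hedged sketch of what "translating your question" can look like under the hood, a token-privilege question maps onto the Consul ACL CLI. This is an assumption about the mapping, not a documented trace of the MCP server's behavior:

```shell
# List ACL tokens so their attached policies can be reviewed
# for over-broad privileges.
consul acl token list

# Inspect a single token in detail (accessor ID is a placeholder).
consul acl token read -id <accessor-id>
```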

»How to use these tools effectively

To get the best results when using these MCP servers, follow these two tips:

1. Force MCP server use. Sometimes the AI guesses instead of asking the server. In editors like VS Code or Cursor, start your prompt with #terraform or #vault. This forces the AI to check the MCP server for the real answer.

2. Set your standards. Every team works differently. Drop a markdown file (like agents.md) into your workspace to tell the AI your preferences (e.g. whether you want CLI commands, API calls, or VCS workflows). There are some examples available for the Terraform MCP server.
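A minimal sketch of what such a standards file might contain; every convention listed here is an example of the kind of preference you would record, not a required format:

```markdown
# agents.md — team conventions for AI assistants (illustrative)

- Prefer the Terraform CLI over direct API calls.
- Use the VCS-driven workflow for workspace changes; do not
  suggest running `terraform apply` locally.
- Source modules from our private registry before falling back
  to the public registry.
```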

»Learn more and get started today

Ready to give your AI assistants a fuller context for your infrastructure and secrets landscape?

Give our other MCP servers a try here:

If you don’t have access to all of the Vault and Terraform features, get started with HashiCorp Cloud Platform (HCP), the fastest way to set up cloud-managed versions of those products.
