Ansible and HashiCorp: Better Together

Presenters from Red Hat and HashiCorp showcase workflows that integrate the best parts of Ansible and the HashiCorp stack for configuration and provisioning.

Automation tools don't have to be competitive; great things can be achieved when you combine great tools and collaborate. Learn how users of the HashiCorp stack can use Ansible to achieve their goals of an automated enterprise—through complementary security, image management, post-provisioning configuration, and integrated end-to-end automation solutions.

This talk is presented by Dylan Silva, principal product manager of Ansible at Red Hat, and Sean Carolan, solutions engineer at HashiCorp.

Transcript

Sean: I'm Sean Carolan, solutions engineer with HashiCorp. I'm based out of Austin, Texas.

Dylan: I'm Dylan Silva, the product manager for Ansible Engine or Ansible Core, or just Ansible project—whatever you want to call it. I'm out of this fair Bay Area of San Francisco.

Sean: So today we're going to present to you our musings on the subject of how our tools work better together.

Dylan: A lot of you and a lot of our users pretty much like to peg us as an either/or story, and our goal here is to really tell you how these two tools, or how the HashiCorp suite of tools and the Ansible suite of tools are really very synergistic with each other. So just kick back, enjoy the ride with us and hear us muttering amongst ourselves on how these work together.

Sean: How many folks in the room today are using Terraform? Okay. Good.

Dylan: And keep your hands up. How many of you are actually Ansible users as well? All right. A fair amount.

Sean: Nice.

Dylan: I'll apologize in advance. I've got a boring slide coming, but we'll get past that one pretty quickly.

Sean: We'll try to spice it up.

Why do you need more than Ansible?

Dylan: So, the first question. Well, we look at it this way from the Red Hat Ansible Automation side, which really is the culmination of Engine, Tower, Galaxy, and Ansible Vault: how do we take that tool set, and the community that comes with it, and extend it out to the rest of the ecosystem? So occasionally you'll hear us refer to Ansible as the glue of all that is automation, all that is the DevOps tool ecosystem we all work with.

Taking a step back, Ansible doesn't necessarily have to own and do every single task that it sets out to do. Being that glue, or being the orchestrator (think of it as the composer of a nice symphonic piece), we can reach out to other tools and work with them to do the tasks they're best suited for. We're not the one big instrument that owns the whole piece. There are other instruments that can do the job better than us, or can do it in a way that we wouldn't be able to tackle ourselves. So that being said...

Sean: Today we'll be showing you how three different HashiCorp tools can benefit the Ansible user. First we'll take a look at HashiCorp Vault—our secrets management product—and how it compares to Ansible Vault. Next we'll show you how Ansible can be combined with Terraform or Packer to enable powerful and efficient build pipelines.

A tale of two Vaults

There are many products and projects that contain "vault" in their name. When you think of a vault, you might think of a giant safe in a bank with a big door where you can lock your secrets away. HashiCorp Vault can certainly be used to store your secrets, but it can also generate new secrets or even encrypt your data on the fly. Think of it as a modern, multi-cloud, distributed, API-driven secrets management engine.

Dylan: And we have a lot of those same concepts as well with our Ansible Vault, but we want to expand on this by saying a couple of things.

Sean: HashiCorp Vault comes in both open source and enterprise versions. A Vault cluster can store secrets or generate dynamic credentials. Multiple authentication methods are supported, so Vault is easy to integrate with provisioning and config management tools like Ansible or Terraform.

Dylan: And when we talk about Ansible Vault, it really is just a feature to us. It's a place where you can keep protected, sensitive data and have it vaulted for use in a playbook run. That's pretty much it. When we think of places to secure your passwords or tokens or other authentication credentials, that's something HashiCorp Vault is really good for. We actually suggest that users push toward something like that.

Sean: How many of you are using Ansible Vault today? Show of hands. Okay, a good handful. Where do you store the password? Okay, we have a thing for that in case you need a place to store the password.

What is Terraform?

All right, so brief review. Some of this will be review for most of you. In case any of you are brand new to Terraform, here's a real quick, brief introduction.

Terraform is an open source command line tool that can be used to provision any kind of infrastructure on dozens of different platforms and services. Terraform code is written in HCL, the HashiCorp Configuration Language. You can see an example of that up here. HCL is easy to learn and easy to troubleshoot. It's meant to strike a balance between human-friendly and machine-readable code.

With Terraform you simply declare resources and how you want them configured and then Terraform will map out the dependencies and build everything for you. In a moment we'll show you a demo where Terraform stands up a server and then calls on Ansible to configure it.
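
For reference, here's a minimal sketch of what that looks like. The resource names and values are hypothetical, and the exact arguments depend on the provider version you're running:

    # A hypothetical example: a resource group and a virtual network inside it.
    # Terraform infers the dependency from the references and builds them in order.
    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "demo" {
      name     = "hashitalks-demo"
      location = "East US"
    }

    resource "azurerm_virtual_network" "demo" {
      name                = "demo-vnet"
      address_space       = ["10.0.0.0/16"]
      location            = azurerm_resource_group.demo.location
      resource_group_name = azurerm_resource_group.demo.name
    }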

What is Ansible Automation?

Dylan: And then when we talk about Ansible, there are two things that we like to touch on.

  1. It's the project itself. The engine. That's what drives the automation forward for Ansible, and everything is perfectly described in that Ansible playbook, as you guys know.

  2. And then Tower, as the second part, is the framework that sits on top of the Engine to drive that automation. That is pretty much our glue between all of the different tools in the ecosystem, getting them to work together in harmony.

And then here is an example of a playbook, for those of you who don't know it. I won't read it to you, but it's in YAML syntax, so it's very human-readable as well. From top to bottom, it runs tasks sequentially by default. You can see all the different highlights of the things that make up a playbook.
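
For those following along at home, a minimal playbook in that spirit might look like this (the group, package, and service names are placeholders):

    ---
    # Hypothetical playbook: install and start nginx on hosts in the "web" group.
    - name: Configure web servers
      hosts: web
      become: true

      tasks:
        - name: Install nginx
          package:
            name: nginx
            state: present

        - name: Start and enable nginx
          service:
            name: nginx
            state: started
            enabled: true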

So moving on to Terraform.

A graph for success

Sean: We're back to Terraform. Does everybody know what this is? Yes. This is a graph of a bunch of Terraform code. So when you run Terraform, it very quickly and efficiently crawls through your code and puts everything in the correct order. I used a free, open source tool called Blast Radius to create this graph.

When you run Terraform Plan, it builds the graph before it actually goes and deploys your infrastructure. So this graph represents all the variables, resources, and dependencies required to stand up a single virtual machine in Azure. Now just imagine how complicated this can get when you need to stand up entire networks of machines, including load balancers and platform services. Terraform automatically maps out all these dependencies in the correct order for you.

Ansible-managed Packer

Let's talk about Packer. Who's using Packer today? All right, nice. Packer is the third HashiCorp tool that we mentioned. Packer builds machine images on different platforms. The modern operations team is actually a software delivery team. Servers are no longer physical machines that you set up and build, instead they're software artifacts that we deliver with CI/CD pipelines.

Why might you want to build your images with Packer? First of all, Packer is easy to use and it works great with your existing Ansible code. It allows you to create security-hardened images or preinstall large software packages for quick deployments or autoscaling. You can bake the latest OS patches into your images on multiple platforms, and you can ensure that the same OS image is being used on-prem and in the cloud.

Our first use case here is building a simple Amazon machine image, or AMI as they're called, using Packer and Ansible. Packer works with all the major cloud platforms as well as VMware, OpenStack, and Docker.

Dylan: So the idea is basically taking the creation of an image and extending that next step to Ansible. The configuring and making of that beautiful golden image that we all look toward deploying in our environments, right, that's the step Ansible can do for Packer when you're building that image.

But then there's another concept we'd also like to talk about, the interweaving of it: Ansible working with Packer and telling Packer to take that step to build an image. So, basically, building an Ansible workflow that goes through the process of building a golden image and then deploying it.

Sean: I like to call this DevOps-ception.

Terraform calling Ansible

So here's another use case. You can use Terraform to call Ansible. Terraform is a great infrastructure provisioning tool, but you may have noticed that it doesn't come with a config management system. That's where Ansible comes in. We use Terraform to stand up virtual machines or cloud instances, and then we hand over the reins to Ansible to finish up the configuration of our OS and applications.

Dylan: And then once again, Ansible calling Terraform: in Ansible 2.5 we released a Terraform module to do just that. Within that module you can tell Terraform to run a plan, or to apply and set up your Terraform environment right then and there. So, going through that flow, you could start with Packer and then move on to Terraform, all through one Ansible playbook.
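
As a rough sketch, calling Terraform from a playbook with that module looks something like the following. The project path and variables are placeholders; check the module documentation for the full parameter list:

    ---
    # Hypothetical use of the terraform module introduced in Ansible 2.5.
    - name: Provision infrastructure with Terraform
      hosts: localhost
      connection: local

      tasks:
        - name: Apply the Terraform configuration
          terraform:
            project_path: ./terraform/web-cluster   # directory containing the *.tf files
            state: present                          # runs terraform apply
            force_init: true                        # runs terraform init first
            variables:
              instance_count: "2"
          register: tf_result

        - name: Show the Terraform outputs
          debug:
            var: tf_result.outputs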

Better together

Sean: Speaking of DevOps-ception, we have this one slide.

Dylan: Here you go. This is the idea that we're trying to tell everybody to think about. It doesn't have to be one or the other.

Sean: Yes. Let's see if we can describe what's going on here:

  1. We have Packer calling Ansible to build our machine images. Those are going to live in the artifact repo.

  2. Then Ansible calls Terraform to build an instance or multiple instances from those images that we created.

  3. Along the way, you might fetch some secrets from Vault.

  4. Then we call Ansible again to finish configuring the machine, doing any last-mile config you need to get it ready for production.

So the two tools can actually be used in a very complementary manner.

Live demo

Dylan: So I think this is where you point people to your actual demos and stuff.

Sean: Let's do the demo.

How do I use Ansible with Terraform? Let's walk through that example first. Pretty standard Terraform code. We're standing up a Google compute instance. With Terraform we have the concept of a provisioner. The provisioner is the thing that's going to run your shell script or your Ansible code to finish configuring the OS and applications that live on the machine.

You might notice that we have a remote-exec, and it's just doing an echo command. Why would that be there? Well, when you run Ansible with Terraform using local-exec, if you don't have a remote-exec in here, the local-exec attempts to run as soon as the machine is spun up. SSH isn't ready yet, and the command will fail. So this is a little bit of a hack that we use: the remote-exec ensures that SSH is up and listening. We just run this little echo command (you can put any command you want in there), and that way, once SSH is up, you can run your Ansible playbook the same way you always have. So local-exec is one method to run Ansible on your machines using Terraform, and it's probably the most comfortable and familiar if you're a long-time Ansible user. We're SSHing into the machine and running our playbook, just like we always have.
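
Here's a rough sketch of that pattern. The machine details, SSH user, key path, and attribute references are illustrative and will vary with your provider and Terraform version:

    resource "google_compute_instance" "web" {
      name         = "web-01"
      machine_type = "n1-standard-1"
      zone         = "us-central1-a"

      boot_disk {
        initialize_params {
          image = "centos-cloud/centos-7"
        }
      }

      network_interface {
        network = "default"
        access_config {}   # gives the instance a public IP
      }

      # The "hack": this remote-exec does nothing useful, but it forces Terraform
      # to wait until SSH is actually up before moving on.
      provisioner "remote-exec" {
        inline = ["echo 'SSH is ready'"]

        connection {
          type        = "ssh"
          user        = "centos"
          private_key = file("~/.ssh/id_rsa")
          host        = self.network_interface[0].access_config[0].nat_ip
        }
      }

      # Now it's safe to run the playbook from the machine running Terraform,
      # just like a normal ansible-playbook run over SSH.
      provisioner "local-exec" {
        command = "ansible-playbook -u centos --private-key ~/.ssh/id_rsa -i '${self.network_interface[0].access_config[0].nat_ip},' site.yml"
      }
    }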

So I've got an example of that down here. We'll just go ahead and run it in my other terminal. So I see an Ansible cow. That's a good sign, right? That was a previous run, and as you know, if we run Terraform again, we get the same result. So I've deployed a simple cat app. I heard you can score internet points with cats, so we put a cat in our presentation.

The second way you can run Ansible and Terraform together is using the remote-exec method. How's the text size in the back? Can you folks read this? All this code is online, by the way. We'll give you the link and that will be posted later.

The remote-exec approach can be handy when you don't have SSH access to the target host from your workstation. You need to run Ansible on that machine, but you might not be able to SSH to it directly. With remote-exec, you just have to figure out a way to get your playbooks onto the machine. So here I've hacked together some code that pushes all of the Ansible code out there, installs Ansible, and then runs it from the remote host itself. It's a little bit more work, but there are some use cases and scenarios where this could be handy.
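
A sketch of that approach, assuming a yum-based base image and reusing the same connection block as the previous example, might look like this:

    # These provisioners sit inside the compute instance resource, alongside the
    # same connection block shown in the previous sketch.
    provisioner "file" {
      source      = "ansible"   # local directory holding playbooks and roles
      destination = "/tmp"      # ends up as /tmp/ansible on the target
    }

    provisioner "remote-exec" {
      inline = [
        "sudo yum install -y ansible",                                      # assumes a yum-based image with Ansible in its repos
        "ansible-playbook -i 'localhost,' -c local /tmp/ansible/site.yml",  # run against localhost on the target itself
      ]
    }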

And then the final one is Packer. Packer is really easy to get started with. If you're not using Packer today, go home and take it for a spin. You can take an existing playbook, drop it into a config file, and then run your packer build command, as I've done here, and see what happens. Packer spins up a new instance. It'll even create a temporary key pair for you, so you don't have to worry about how to connect to the machine. It'll get on the machine, configure it, and then snapshot it into an image that you can use.
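
For reference, a minimal Packer template for that kind of build might look like this. It uses Packer's JSON template format with the ansible provisioner; the region, source AMI, SSH user, and playbook path are placeholders:

    {
      "builders": [
        {
          "type": "amazon-ebs",
          "region": "us-east-1",
          "source_ami": "ami-0123456789abcdef0",
          "instance_type": "t2.micro",
          "ssh_username": "centos",
          "ami_name": "web-baseline-{{timestamp}}"
        }
      ],
      "provisioners": [
        {
          "type": "ansible",
          "playbook_file": "./site.yml"
        }
      ]
    }

You'd save that as something like web.json and run packer build web.json; Packer brings up the temporary instance, runs the playbook over SSH, and snapshots the result into a new AMI.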

You can actually automate all of this too, using a CI/CD build pipeline or something, and this way you have an image factory. Maybe you would like to have the same version of Red Hat in both your cloud and your on-prem environments. Packer can enable that for you.

So this is the code. This will be posted later if you want to get a copy of this. Also, we have tutorials on our website that explain how to use these two tools together.

Dylan: Just real quick too: we haven't shown a demo of this, but on the HashiCorp Vault side of things, in Ansible we provide a lookup plugin, for those of you who don't know that exists. You're able to take data out of HashiCorp Vault and use it in your playbook runs as well. There's documentation on that too.

Sean: Yes, that's documented here. It's actually called the hashi_vault plugin.
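
As a quick sketch, using that lookup in a playbook looks something like this. It assumes the hvac Python library is installed and a Vault token is available (for example, via the VAULT_TOKEN environment variable); the secret path and Vault address are placeholders:

    ---
    # Hypothetical example of the hashi_vault lookup plugin.
    - name: Read a secret from HashiCorp Vault
      hosts: localhost
      connection: local

      tasks:
        - name: Fetch the database password
          set_fact:
            db_password: "{{ lookup('hashi_vault', 'secret=secret/myapp:password url=http://127.0.0.1:8200') }}"
          no_log: true   # keep the secret out of the job output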

Q&A

So that brings us to questions. What questions do you have for us? I see some hands through the light.

Audience: Can you utilize the credentials within HashiCorp Vault as a credential within Tower?

Dylan: As of right now, no. That's actually something on the Tower side of things that we're adding as a feature in the future, so stay tuned for that. Right now, no, but it's something we're working towards.

Audience: Can you talk a little bit more about running Ansible directly on a target machine?

Sean: Yes. There's a little bit more in the way of moving parts, right? Because you obviously need the Ansible binary and the command itself to be able to run, and you've got to have a way to feed your playbooks or get them onto that machine somehow. If you can do those two things, either with a shell script or a little wrapper, you can run the remote way, where all of the Ansible activity happens on the machine itself and you're just running against localhost instead of going remotely over SSH.

Audience: [inaudible]

Dylan: Yes, that would definitely be one thing: if there were systems that only had access to that DMZ, per se, that's one route you would take. Another use case I'd think of is when you have a Terraform Enterprise system and an Ansible Tower type system; that's the remote-exec right there. Most of that will be done over API, most likely, but from a callback side of things, there may be some instances where Ansible itself has to interact with Terraform itself, so that's that.

I think the main use case, though, would be DMZ related things like you only have access to that host from that location and that part of the network. That's a good question though.

Audience: [inaudible]

Dylan: On the Ansible side or on the...

Audience: [inaudible]

Dylan: That's a good one. I never actually thought of that. Thank you.

Sean: Output the host name and put it in the inventory?

Dylan: Yes, but I'm wondering if there's a way that we could leverage Terraform Enterprise to generate a dynamic inventory. Because all of the data coming from Terraform is the source of truth at that point.

Sean: That's right. So with Terraform Enterprise, you could have remote workspaces advertising those outputs, or have those outputs available, and you could just fetch them with an API call.

Dylan: Yes, and since that API presents JSON, there's already a way for Ansible to consume that data, right? It would just be the basic key/value pairs that would come from any dynamic inventory plugin. I like that as an idea. That's something we'll definitely be talking about together, because I hadn't thought about that.

Sean: If you want to chat some more after, just come and see us after the talk.

Audience: [inaudible]

Dylan: I think that's pretty much the whole point of this, right? It's about building that awareness and getting people to realize that there is a lot of synergy between the tools. As far as us building those integrations together, that's a work in progress between us. We've got a full list of road-mapped items to work towards and start delivering as two companies, so you'll start seeing all of that over time.

Sean: I got a request to repeat each question for the recording.

Sean: Kevin. Hi Kevin.

Kevin: I kind of see this as a useful pattern for systems that require shared services where you can't get away from justifying your infrastructure as policy. We still need configuration management. I don't know if you have any thoughts on that.

Dylan: So, it was more of a statement of this is a good way to prove that there is still value for configuration management outside of system policy or infrastructure as code.

My views on the subject are definitely akin to that. It always gets back to the baked-versus-fried discussion, right? Are we baking a fully golden image, or are we getting all the pieces together and frying it at the end? I'm more of the latter from my past operations work. I feel it's better to get as much together as you possibly can, build that image, and then do your deployments at the end, because there is always going to be that little bit of configuration management you have to do. And when I think of Ansible in the bigger scheme of things, there's more to it than just the systems being set up, right? You have your application monitoring that you have to set up and orchestrate, and the alerts for all of that.

I always think of that bigger picture, and from a policy standpoint, that's one thing that's often lacking when it comes to presenting that policy to your actual reviewers. Let's say you're about to go public and you need everything reviewed; those are some things that get overlooked that I always liked to highlight when I was in operations.

Sean: I liked Kevin's approach because it's very practical. There are going to be shared services that you stand up and just leave alone and, you know, it's important to have good config tools for building and maintaining those things.

Audience: Splunk?

Sean: Splunk for example, yes. That's a good one.

Dylan: That's the thing, right? There are so many tools in this ecosystem that we all have to work with, and it's just a matter of how you get them all working with each other. That's kind of the story here today.

Sean: Good. Yes.

Audience: In one of your examples, you had Terraform calling Ansible and then Ansible calling Ansible. Have you given much thought to this: is there an effective way for Terraform to call Ansible, and then Terraform to call Ansible again, instead of Ansible calling Ansible? Like some way of managing successful runs, but just in Terraform?

Sean: Okay, so to sum up the question: we showed you how to use Ansible with Terraform, and Terraform with Ansible, or Ansible to call Terraform and then call Ansible. What are some recommended patterns for how to do that to get the most out of both tools? Would that kind of ...

Audience: Well more like, I was wondering if you've thought of an effective way for Terraform to call Ansible, and then Terraform to call Ansible again.

Dylan: I think one of the most effective ways is that we could actually have an Ansible provisioner for Terraform. That's something we're actively working on together as two companies, so stay tuned for that.

[applause]

Dylan: Along those other lines, I would say it's more dogma at that point. I wouldn't necessarily want to prescribe that any person use one or the other, because it really comes down to your actual corporate policies at that point. And there are some cases where users may want to go through Terraform Enterprise as opposed to Ansible Tower, and vice versa.

It's really specific to your situation, so I personally wouldn't want to prescribe that. But at least with the two tools together, there are some improvements coming.

Sean: I have a question. I'd like to turn this around. Folks who raise their hands for using both Terraform and Ansible, is there anyone calling Terraform with Ansible and orchestrating Terraform runs with Ansible? Raise your hand.

Dylan: How many of you are using the Terraform module today?

Sean: Yeah, I see a hand there. How do you do it? What's it look like?

Audience: [inaudible]

Sean: Just to repeat a couple of options: they have Terraform calling Jenkins, and then Jenkins will call Ansible and that way they know if there's a failure in the build. One more? Yeah.

Audience: I just had one more note.

Sean: Okay.

Audience: Which is that you may have the use case where you want to rely on Terraform Enterprise because you have an audit log and user access.

Which means Ansible alone is not a use case for that.

Dylan: But for Ansible alone, correct. So the point was that Terraform Enterprise has a good audit log of what's happening in the cloud. I think we have the same thing to a point in Ansible Tower as well, but that's more just access to Tower and who is running Ansible jobs in there.

That's more the data that's shared between the tools. There's a lot that could be done there about getting that data somewhere that can make use of it. Think of things like Splunk or ELK, right, going back to that previous discussion. There's a lot of work we can do there together that we're thinking about, too.

Audience: Very useful for security and collaboration.

Dylan: Oh yeah.

The gentleman in the second row has a question. He's been so patient.

Audience: Have you used local-exec to ship playbooks to the instances, or have you used remote-exec or the AWS user data to call Tower and make a callback to the instance?

Dylan: So the question was, have we used remote-exec to call Tower and do a callback with Ansible. That's actually something we're working on today: seeing how we can get Terraform Enterprise to integrate with Ansible Tower to do that callback functionality.

Audience: We need to have the key for our Terraform configuration updated with what we want our host to be; we want Tower to know what should be hosted and what is [inaudible].

Dylan: Yeah, definitely. And I also think of it this way: you could have Terraform Enterprise call Ansible Tower after those systems are up and running, and then there can be that data sent back in that callback to Terraform and say, "This is the current state of the Terraformed world, and this is how all my systems are set up, and this is what's been deployed to them." Yep, totally can see that as an option.

Sean: I see some folks back here. How about you in the Docker shirt?

Audience: I was wondering about Consul integration points.

Sean: Consul integration points with Ansible.

Dylan: If I recall correctly, there is something out in the ecosystem that has a Consul inventory, but I think it was a dynamic inventory script. What I would love to see is somebody in the community come out and build an inventory plugin.

For those of you who don't know, there's a transition happening from dynamic inventory scripts to inventory plugins. We're remastering the whole Ansible code base to be more plugin-based. What that gives you, the end user, is an inventory cache that you can pull data from. I think things like Consul lend themselves to having that cache available, because Consul is already dynamic in and of itself. We want to get more data out and usable by the end user in Ansible playbook runs. That's one thing I would like to see.

There's also a lookup plugin that could come into play as well, just for pulling data out. The sky's the limit with Consul. It's not something we're actively working on right now, because we wanted to tackle the Terraform provisioner first and tell this story from a Terraform standpoint.

Audience: [inaudible]

Dylan: Yeah, definitely. The statement there was that there's also value on the flip side: once an Ansible playbook run is done, the data that came from it can be put back into Consul. I would even take it further and say HashiCorp Vault is another place you can put data that's been generated. Let's say a system has been spun up and a key was generated as part of that; you could throw that into HashiCorp Vault.

Both come into play. Putting data back in is also part of the orchestration flow that you can go through. Definitely something to think about, and something we're working towards.

There's another two questions over there.

Audience: Are we getting a Terraform provider for Ansible Tower?

Dylan: We haven't talked about that yet, but that's not a bad idea, actually. I'm not the Ansible Tower PM, but I'll definitely talk to him about that and see what his thoughts are, because there's a lot on the Ansible Tower side that we're thinking about for... I don't want to say federation, but for Tower federation, how we manage multiple Tower instances in your environment. We're taking the Docker approach for that, or a container approach.

Yeah, I think that's actually a good idea from a Tower side of things, because we have Tower modules. I don't see why we couldn't have a Terraform provider for Tower as well. Not a bad idea.

Sean: Any other questions?

Dylan: Something to take back to Armon and team.

Sean: Yeah. We'll go badger the people in the purple shirts.

Dylan: There was another question back there.

Audience: Just a real practical question: I started using Packer and Ansible together recently, and I was wondering, is there any recommended folder structure you have? Should they go side by side? Should one be a sub-repo, or…?

Sean: The question was, "Is there a recommended folder structure for your Packer code?" I generally like to break it up by cloud or platform, and then I try to put all of the shared files (scripts or playbooks, what have you) into the same folder where each of those individual Packer templates can reach them.

Audience: I'm saying for the remote exec on that Ansible play, when you're running the Packer, is that horizontal, to the side, or…?

Dylan: I'm trying to remember mine. I think I had them off to the side. What I did at my last job was actually build new AMIs every single day; those were CentOS images getting put into Amazon. I believe I did have the playbooks in the same repo, off to the side. Yeah, that's what it was.

We had one large code base, and for our deployment scripts we had three directories: a Packer directory, a Terraform directory, and a playbook directory. All the roles and everything playbooks-wise lived in there. We would work our way up to Packer and start building those AMIs, then provision them with Terraform, and then go back to Ansible to do the application deployments at that point in time.
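
In other words, a layout along these lines (the names here are only illustrative):

    deploy/
    ├── packer/        # Packer templates for building the AMIs
    ├── terraform/     # Terraform configurations for provisioning
    └── playbooks/     # playbooks and roles shared by Packer builds and post-provisioning runs
        └── roles/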

That worked well for me. I will say, just out of personal preference, stay the hell away from Git sub-modules, but once again, that's dogma. I don't like sub-modules. That's me.

Sean: We're just about at time. We'd like to take one last opportunity to thank you all for coming. Without our users, this conference wouldn't be possible. Pound some coffee, work through the next session, and then come join us in the Tonga Room tonight for a party sponsored by Microsoft.

Dylan: Thanks everybody.

Sean: Thank you.
