
The Road to Packer 2.0

This talk will explain recent changes to Packer that support easier adoption of community plugins, and it will also look at the milestones still to complete on the path to Packer 2.0.

Speaker: Megan Marsh

»Transcript

Megan Marsh:

Hi everyone. I hope you're having an awesome HashiConf. Today, I'm going to be talking to you about Packer 2.0, what is changing and why I think it's going to knock your socks off. 

My name is Megan Marsh. I'm the engineering lead for Packer. I've been maintaining Packer for four years, and I've been the project lead for the last two. My GitHub handle is SwampDragons, for those of you who have interacted with the tool there.

This talk isn't about me, though. It's about what you as users will need to do to prepare for our transition to Packer 2.0, and it's about why I'm excited for you to upgrade. I want all of you to have the best possible experience using Packer. Everything I'm going to touch on today is designed to make your experience with Packer better and to make sure the Packer community is set up to thrive.

We're going to talk a bit about the forward-looking and, therefore, backwards-breaking changes that are part of the 2.0 upgrade so you can get your feet under you long before you ever have to flip the switch on installing and using 2.0. 

There are a few pieces to today's talk. First, I'm going to talk about the introduction of the new command-line command packer init. We've made several architectural changes to Packer to make it easier for the community to create, share, and install plugins for Packer. But these changes come with some important implications that you need to be aware of.

In the second part of this talk I'm going to be talking about HCL templates. We love the new HCL templates so much that we're confident you'll love them too, and we're going to deprecate the legacy JSON templates. Finally, I'm going to give you a sneak peek at some of the great stuff to come once we've made these huge changes.


»Packer Structure 101

Before we dig too deep into the init command itself, I think it will be useful for everybody to get a quick primer on Packer architecture. For those of you who saw my HashiConf US talk a couple of years ago, not a lot has changed since then, but I'm going to be talking to you about changes that are about to come down the pipe.

Packer uses four major component types to perform all its custom functionality. If you've written Packer JSON templates, you're going to recognize the categories listed here — builders, provisioners, post-processors — as root-level fields in the Packer template, and you probably have a general understanding of what they do. For example, builders create instances, manage the instance lifecycle, and then create artifacts based off those instances. Provisioners modify the instances while they're running. Then post-processors modify the artifact once an instance has been shut down.
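
To make those roles concrete, here is a minimal sketch of how the component types appear in a modern HCL template (the happycloud source is a hypothetical plugin; the shell provisioner and compress post-processor ship with Packer):

    # A builder is configured as a "source" block (happycloud is hypothetical).
    source "happycloud" "example" {
      image_name = "my-base-image"
    }

    build {
      sources = ["source.happycloud.example"]

      # Provisioners modify the running instance.
      provisioner "shell" {
        inline = ["echo provisioning"]
      }

      # Post-processors act on the artifact after the instance shuts down.
      post-processor "compress" {
        output = "artifact.tar.gz"
      }
    }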


The Packer core talks to all these components individually, and it orchestrates the builds based on your template — plugging the components you request into the right places in the core build. That's where we get the concept of Packer plugins. Packer doesn't care whether a component is bundled into the Packer core or whether it's an external binary. It just runs each component as a sub-process and talks to each one separately over a custom RPC. 

I use these terms — plugins and components — interchangeably on the slide, but there is a difference between the two of them. A plugin is a binary that Packer installs and can talk to. A component is a Packer-specific piece of code that fulfills one of those four interfaces — provisioners, post-processors, builders, and the new HCL-only data sources — and talks to the Packer core over that RPC.

»Rearchitecting Plugins

Historically, there's been a one-to-one relationship between plugins and components, which means that if you wrote a standalone plugin for Packer, it would serve exactly one component. The name of your plugin on the file system would tell Packer what component type it was, and that name of the plugin on the file system would be what you would use in your Packer template to call the component.
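
As a rough illustration of that old convention, each HappyCloud component would have shipped as its own binary, named so Packer could infer its type (names hypothetical):

    # One binary per component; the filename encodes the component type:
    packer-builder-happycloud
    packer-provisioner-happycloud
    packer-post-processor-happycloud-export

    # A JSON template then referenced the builder by the name's suffix:
    # "builders": [{ "type": "happycloud" }]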


This was useful for one-off plugins, but it could get super-hairy in complicated cases. For example, let's say you are a cloud-compute vendor — or you work for one — and we're going to call it HappyCloud. You want to integrate HappyCloud with Packer to make it easier for your users to use and create HappyCloud images. 

But you have a lot of components to write for HappyCloud. You want to write a builder — or maybe three or four — depending on how complex your cloud platform is. You want to write one of the new HCL-exclusive data source components, allowing users to set a variable based on an API call or other custom function. And you want to have import and export post-processors for moving images into and out of your cloud.
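
For a sense of what a data source looks like in practice, here is a sketch using the real amazon-ami data source that arrived with Packer 1.7 (a HappyCloud equivalent would be hypothetical):

    # Resolve an AMI ID at build time and feed it into a builder.
    data "amazon-ami" "ubuntu" {
      filters = {
        name                = "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"
        virtualization-type = "hvm"
      }
      most_recent = true
      owners      = ["099720109477"]
      region      = "us-east-1"
    }

    source "amazon-ebs" "example" {
      region        = "us-east-1"
      source_ami    = data.amazon-ami.ubuntu.id
      instance_type = "t3.micro"
      ssh_username  = "ubuntu"
      ami_name      = "happycloud-demo"
    }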


To use your HappyCloud tooling, your Packer users are going to have to install half a dozen Packer plugins to make their builds work — one for each builder, data source, and post-processor that they want to use as part of their Packer build. Faced with the prospect of creating, maintaining, and forcing your users to install all those different plugins, you might be feeling like rebranding.

Historically, we've vendored everything everyone could want from all the major cloud providers into the same Packer binary to solve this issue. But that means a binary that started small, over the years, has gotten bigger and bigger and bigger — to the point where if people want to use the HappyCloud builder and the shell provisioner, they're not just installing those components. They're also installing every single component written for Amazon, Azure, Google, and all supporting libraries for each of these plugins.

»Multi-Component Plugins 

To solve this headache for our community, we've decided to totally re-architect plugins to serve more than one component from a single binary. This allows people to fire up different Packer components and bundle them together based on the technologies they interact with rather than on the interface they fulfill. 

Instead of having a dozen separate HappyCloud plugins, we would have one HappyCloud plugin — and from there, you'd be able to install one Amazon plugin, one Google Compute plugin, one Docker plugin, but only if you need them. You can think of multi-component plugins as analogous to Terraform providers, which are often logically grouped in the same way.

We've also created a scaffolding template to make it easy for our users to create new Packer plugins for vendors like HappyCloud. The scaffold doesn't just give examples of how to write and architect plugins; it also provides a GitHub Actions workflow that lets maintainers standardize the plugin release process across the community. That's super handy now that it's time to talk about what our users are going to gain from the new plugins: a simple, easy installation process via the new CLI command, packer init.

»Packer Init

Instead of writing the code, building it yourself, and adding it to your Packer directory, you only have to add a required_plugins block inside your HCL template and call packer init. Packer handles downloading and installing the required plugins for your current working directory: Packer will say, "I know what you need. I'm going to go up to GitHub and pull down the correct version for your operating system." That might sound like magic, so let's talk about how it works.
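
Here's a minimal sketch of what that looks like (the happycloud plugin and its source address are hypothetical):

    packer {
      required_plugins {
        happycloud = {
          version = ">= 1.0.0"
          source  = "github.com/hashicorp/happycloud"
        }
      }
    }

With that block in place, running packer init . from the template's directory downloads and installs any plugins that are missing.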

Instead of requiring plugin maintainers to add their plugins to a centralized registry like the Terraform registry, we're treating GitHub itself as the registry. As long as plugin binaries are released with the correct names on GitHub, Packer can find them and pull them down for you. This explicit, specialized naming convention allows Packer to grab a binary for the right release version and the right operating system and architecture for your build.
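
To give a feel for the convention, release assets end up named roughly like this (happycloud is hypothetical; the x5.0 segment identifies the plugin API version):

    packer-plugin-happycloud_v1.0.2_x5.0_linux_amd64.zip
    packer-plugin-happycloud_v1.0.2_x5.0_darwin_amd64.zip
    packer-plugin-happycloud_v1.0.2_x5.0_windows_amd64.zip
    packer-plugin-happycloud_v1.0.2_SHA256SUMS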


In the future, we want to add support for other domains so you can download plugins from wherever you want, but we're hoping that, for now, the GitHub use case will cover most needs and manual downloads will fill the gaps where it doesn't.

»Releases Are Just a Tag Away

Those specialized naming conventions are why the release GitHub action I alluded to earlier will be super useful for maintainers. It handles all of the specialized naming for you, and it makes building the plugin absolutely trivial: from a maintainer's perspective, all you have to do is push a semantic versioning tag up to GitHub, and the GitHub action will handle everything else.
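
In practice, assuming the scaffolding's release workflow is wired up, cutting a release is just a tag push (version number illustrative):

    git tag v1.0.2
    git push origin v1.0.2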


»Remote Docs on Packer.io

There's one other issue that Packer has had with community plugins, and that's discoverability. It was important to us when we were redesigning this workflow that community plugins — which we think are equally important — were documented just as well as the official ones. We wanted to make sure that people who want to create and use community plugins can find them just as easily as they can find the official ones.

 

So we've had the same Packer scaffolding repository set up remote documentation as well. The GoReleaser action will create a zipped docs folder that the Packer website can retrieve at build time, and that means the official plugins live side by side with the community ones, documented equally on the website.

Even if they're not bundled with the main Packer binary, all plugin maintainers have to do is add a few extra lines of configuration to the Packer website's core code, and that'll tell the website where to look for the remote docs that need to be surfaced on the site.

This means plugin discoverability will be dramatically improved for our users. But you’ll also benefit from being able to pin plugin versions to make sure that Packer chooses and downloads the correct plugin for your particular template's needs — which will hopefully prevent any nasty surprises if you haven't run your Packer build in a while and someone decided to upgrade in between.

»Packer init Will Be Required in 2.0 

You can already use packer init and multi-component plugins in Packer 1.7; they're compatible with the 1.7 release series. Not only that, but over the last couple of months, the Packer team has been hard at work extracting almost all of the plugins that were historically bundled with the Packer core into their own repositories. They're already extracted.

The reason you haven't noticed that — even if you're in the Packer 1.7 release series — is that we're vendoring those plugins back into the Packer core. That means the Packer core still has those bundled in its own way.

But as of the 2.0 release series, we're going to be removing those from the Packer core. That means packer init will be required in version 2.0. We're going to stop vendoring in all those external plugins, and packer init will be the way to retrieve community binaries and all the official binaries maintained by HashiCorp that used to be bundled in with Packer and now live in their own repositories.

packer init is optional right now, but it will be required — and we don't take that lightly. We understand that implementing new workflows can feel disruptive, and we are here to help. We want to make sure that by 2.0, it's as easy as copy/paste to get the initialization configs up and running and to get Packer builds chugging along as they always were. And, of course, if you're in the Packer 1.7 line and you're happy, there's absolutely no rush for you to upgrade.


One reason it was important for us to ensure the 1.7 release speaks the language of multi-component plugins — rather than releasing multi-component plugins only as part of 2.0 — was to make sure you have this long, leisurely window to start using packer init at your own pace and figure out how it works without worrying about the other major changes coming down the pipe. And we want you to be able to use it now to download the externalized plugins.

»Requiring init Will Be a Good Idea

Here are a few reasons why, even though it's not necessary now. First off, it makes sense to enforce plugin versioning: making sure you know what plugin version a particular set of Packer templates uses means you're not going to have those nasty surprises. Second, it allows for easier auditing of the tools you depend on, and it explicitly documents what you're using and where you're installing it.

Third, it makes the Packer binary much smaller, which means that we aren't installing a ton of packages you don't want or use — especially for users who are doing things like downloading Packer as part of a CI run. This is hopefully going to make their CI runs a little bit less hairy, a little bit easier, a little bit less flaky.

Finally, moving everyone to this mechanism makes it much easier for community members to create plugins that are accessible to the community. It means all users are, by definition, comfortable installing external plugins. And since these plugins will be external — whether they're official or community-maintained — it's no extra lift for users to adopt community pieces. I think that's going to be a big deal.

»Manual Installation Will Still Work 

You can also install plugins the way you always have. Manual installation will still work in the situation where you cannot use packer init for whatever reason. 

If you end up doing that, you'll just omit the required_plugins block that I showed earlier. You do have to make sure that the multi-component plugins you're installing are compatible with version 1.7 or later. All of the details about manual plugin installation options can be found in the plugins section of the docs website.
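
As a sketch, manual installation amounts to placing a correctly named plugin binary in a directory Packer searches, such as the classic plugin directory on Unix-like systems (plugin name hypothetical):

    mkdir -p ~/.packer.d/plugins
    cp packer-plugin-happycloud ~/.packer.d/plugins/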

»HCL Templates 

The next major change in version 2.0 is that Packer will support only HCL templates moving forward. We've given talks in the past about why we're excited about HCL, and there isn't time in this talk for an HCL template tutorial. But the good news is that we have put a lot of work into creating those tutorials, and you should be able to find the resources you need.


You can use the hcl2_upgrade command to upgrade your JSON templates to HCL templates today. We recommend doing this as soon as possible to get a feel for the new format. Again, there's no rush in doing it, but doing it now before it's required will make your life much easier. 
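
For example, with a JSON template named build.json (an illustrative name), the upgrade command writes a new HCL file alongside the original:

    packer hcl2_upgrade build.json
    # creates build.json.pkr.hcl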

If you run into edge cases where the hcl2_upgrade command doesn't work as expected, we would love for you to reach out to us for assistance on the GitHub issue tracker so we can make sure that command works — not only for your use case — but for everyone else's as well.

»Five Reasons You’ll Love HCL

Let me walk you through a few reasons why I think you're going to love HCL templates, to entice you to upgrade before Packer 2.0. The first is that the HCL format supports packer init and the associated required_plugins block, whereas the JSON templates don't.

Second, HCL2 templates parameterize builder configuration in a way that allows you to reuse builder configurations across multiple builds. This means you can store a build configuration across multiple files, which allows you to reorganize and reuse Packer templates in a way that works for you and your particular organization's use of Packer. 
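
A sketch of that reuse (abbreviated; required builder fields are omitted for brevity):

    source "amazon-ebs" "base" {
      instance_type = "t3.micro"
      ssh_username  = "ubuntu"
    }

    # The same source can feed any number of builds, even across files.
    build {
      name    = "nightly"
      sources = ["source.amazon-ebs.base"]
    }

    build {
      name    = "release"
      sources = ["source.amazon-ebs.base"]
    }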

Third, HCL templates support comments using a variety of syntaxes anywhere in the template, which is something that JSON templates simply don't.
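
All of the usual HCL comment styles work:

    # a full-line comment
    // an alternative single-line style
    /* a block comment */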


Fourth, HCL supports variable types other than strings. It supports lists, numbers, and maps. Anyone who's tried to use a JSON template to pass in an array understands how huge a headache that is right now — and how much better it is to create a variable that can immediately map to an array-type input.
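
For example (variable names illustrative):

    variable "ami_regions" {
      type    = list(string)
      default = ["us-east-1", "us-west-2"]
    }

    variable "instance_count" {
      type    = number
      default = 3
    }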


Finally, HCL templates get all the HCL language functions for free — enabling all kinds of sophisticated template string parsing, numeric functions, date-time stuff, filtering, and lots more. You're going to be able to have a lot more dynamic use of your Packer templates — and hopefully a lot less hard coding of variables — or a lot less complicated scripting around your Packer build to get it working exactly how you want it to.
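
For instance, standard HCL functions like timestamp() and formatdate() let you compute values directly in the template (image_name is an illustrative local):

    locals {
      # produces something like "app-20211021"
      image_name = "app-${formatdate("YYYYMMDD", timestamp())}"
    }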


Like I said before, it's outside the scope of this talk to give you a run-through of Packer's HCL templates, but there are a few great resources to help you get started. The first is the HashiCorp Learn website, which will walk you through all the major aspects of Packer functionality in an HCL template written from scratch.

It'll show you how to use packer init, how to use required_plugins, how to create variables, and how to do all kinds of only-and-except filtering. Everything you know and love about Packer currently, we have a tutorial on Learn to tell you how to do it in HCL. And finally, there's the exhaustive HCL reference documentation on Packer's website. If your question is not answered in that reference documentation, open an issue, and we will write a new doc to make sure your question is answered.

»Sneak Peek 

Now that we've finished talking about those two major pieces of Packer, I want to give you a little bit of a sneak peek into some of the other things we're thinking about right now. One of the major benefits of making these two big architectural changes is that we remove many limitations that came with supporting both the JSON format and the HCL format. The new plugin structure also gives us a lot more flexibility.

In the future, supporting more modular things will be a lot easier for us, and it will give us a much more flexible template style that has a lot of opportunities for enhancing your experience. Here are a couple of things we're thinking about.

»Multiple Communicators Per Build?

The first is — and this is not officially on our roadmap yet, but it's definitely on our minds as we think about what major improvements we want to make in a post-2.0 Packer — the ability to configure multiple communicators per build. Being able to reconfigure your communicator at different build stages will hopefully help resolve some of the issues we've seen when users are trying to harden Packer images for security purposes — or when they have to reboot and accidentally lose the IP address that the communicator was trying to use to connect. The way the JSON template build is architected makes this hard to implement, but we think it's going to be a lot more manageable once we have to support only HCL templates.

»Reusable Provisioner Configuration

This also isn't on the roadmap yet. But the reason I'm saying these things — even though they're not official — is because you're looking at my personal wishlist for the tool right now. And as the lead engineer for the tool, my wishlist tends to turn into the roadmap a little bit down the road. 

The next thing I want to bring up is reusable provisioner configuration. Reusable provisioners are a logical next step, as we've already introduced parameterized builder sources with HCL. With both builders and provisioners being reusable, we can offer a dramatically more concise and modular set of Packer templates, which will hopefully save a lot of users from a lot of copy/paste nightmares. This is definitely something we want to implement during the 2.0 series.

»Image Pipelining and Provenance Tools

The final thing is on the official roadmap: we're working on an image pipelining tool for Packer in collaboration with the HashiCorp Cloud Platform team. I don't know if you got a chance to see my brief guest spot in the keynote earlier — but if you didn't, I'd recommend watching it. The tl;dr is that Packer will be able to reach out to the HashiCorp Cloud Platform to keep track of your image's metadata in a centralized location. And that data — for example, data about your most recent build — is going to be accessible to Packer and even to Terraform via data sources. This will allow you to remove whatever glue scripts or operator copy/pasting you're currently doing to try to link the two.
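
To make that concrete, here is a purely hypothetical sketch of the kind of template this could enable; none of these names are a shipped API as of this talk:

    # Hypothetical: look up the latest production image from a central registry
    data "hcp-packer-image" "base" {
      bucket_name = "base-images"
      channel     = "production"
    }

    # Build on top of it without hard coding an AMI ID or writing glue scripts
    source "amazon-ebs" "app" {
      region     = "us-east-1"
      source_ami = data.hcp-packer-image.base.cloud_image_id
    }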


This will streamline the Packer-Terraform interface and make iterative Packer build pipelines much easier to track. You'll be able to track your image versions, link those image versions to Git SHAs if you want to, and just generally improve visibility around your image factories. We're excited to give this to you — either as part of 2.0 or even potentially slightly before — albeit as an HCL-only feature.

Thank you for spending some time with me today and for humoring me as I shared a little bit of gossip around what we're going to be doing with Packer coming soon. 

I'm excited about the next year for Packer, and I'm excited to see what awesome things the community comes up with now that we've implemented some real enablement and laid the groundwork for you to thrive.

If you have any questions, you can reach out to us via the GitHub issue tracker or the HashiCorp Discuss platform. And if you have any bug reports or feature requests, my amazing team pays close attention to the GitHub issue tracker, and we would love to hear from you. Have a great day.
