
Vault & Kubernetes: Better Together

Watch Jason O'Donnell from the HashiCorp Vault Ecosystem team demo the Vault Agent Injector using static secrets, dynamic secrets, and encryption-as-a-service.

Vault seamlessly augments native Kubernetes workflows by providing stronger baseline security and interoperability. In this talk, Jason will present the newest features of vault-helm and vault-k8s to demonstrate best-in-class techniques for lifecycle management of Vault as well as dead simple integration of any application running on Kubernetes with Vault.

Transcript

Hi, I'm Jason O'Donnell. I work here at HashiCorp on the Vault Ecosystem team and I lead our Kubernetes and Vault integration projects. This talk is "Vault and Kubernetes: Better Together."

Very quickly, the agenda. This is mostly a demo-driven talk, but I'll go over a few things before I start the demos. First, I'll talk about Kubernetes secrets, what they are, and some of the advantages of using something like the Vault Agent Injector, which is a solution by HashiCorp for consuming Vault secrets within Kubernetes. Then I'll do three demos of the Vault Agent Injector:

  1. Using static secrets

  2. Using dynamic secrets

  3. Using transit encryption in Kubernetes

Understanding Native Kubernetes Secrets: Pros & Cons

So, Kubernetes secrets. This is an example of a Kubernetes secret. If you have any experience with Kubernetes you've seen this before. This secret is called mysecret and it has just one secret value in it: a password, and it's Base64 encoded. Kubernetes secrets require Base64 encoding if you create them this way.
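For reference, a minimal manifest for the kind of Secret being described might look like this (the password value here is just an illustration; the data field must be Base64 encoded):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  # Base64 of the string "password" -- only encoded, not encrypted.
  password: cGFzc3dvcmQ=
EOF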

What are the advantages or disadvantages of using Kubernetes Secrets?

They're Static

First, this Kubernetes secret is a static secret. Some operator has set this up for you ahead of time. This means that if the password were to change, somebody would have to go in there and update it. Or if your application has been deleted, an operator would have to clean that up.

They're Base64 Encoded

Next, it's Base64 encoded, like I said, and the example showed that. What this means is that if you're not using an encrypted etcd storage solution, these secrets are only Base64 encoded, which is not real protection. The secret is not encrypted at rest and it could be leaked if the etcd backend were compromised.

They Support Updates, but Can't Signal That a Secret Has Changed

These secrets do support updates, so you can imagine my password changes over time. It's good to update your passwords. But how does my application know that the secret has changed? Kubernetes will refresh the secret for you, but there's no built-in mechanism for signaling my application that the secret has changed. You can roll some of these features yourself, such as computing a checksum and then bouncing a Pod, deleting it and re-creating it. But especially in the high availability world, that's not usually desired for a lot of applications.

No Leases

Finally, there are no leases or time-to-live (TTL) with these secrets. As I mentioned, if my application were to be deleted or removed from production, the secrets, if they weren't cleaned up by an operator, would still be out there. There's a non-zero chance of exposing, say, a database credential.

The Vault Agent Injector

The Vault Agent Injector. The Vault Agent Injector is a mutating admission webhook. What this means is that there is some piece of software running in Kubernetes, and Kubernetes sends events to it, and the webhook can look at those events and make decisions or change things. In our case, what we're going to be doing is injecting Vault Agent containers into Pods. We look at your Pod, and if certain annotations are there, we mutate that Pod spec to include a Vault Agent init container and/or a Vault Agent runtime container.

What this allows us to do is render secrets to paths in a shared memory volume. Basically, you ask for a secret, you might give us a custom template describing how to render that secret, and we will render it to a path on a memory volume that's shared inside the Pod.

Vault Agent also does things like renewing the secrets and the tokens associated with those secrets. So if you have a secret that can be renewed, Vault Agent will automatically update that secret so that it doesn't expire, and if it changes, it will also re-render the template. And the tokens associated with these secrets or leases will be updated over time.

Now, this agent needs to be configured. There are lots of different use cases for the agent, lots of different secrets you might consume, and different templates. The configuration for the Vault Agent Injector is done through annotations.

Vault Injector Event Flow

So very quickly, just showing how the events happen in Kubernetes: say you're creating some Pod. You send that to Kubernetes, you check it in. Kubernetes does authentication to make sure that the request is valid, and then it sends it to the mutating admission webhook.

In our case, we take a look at the request, we find those annotations, and we change the Pod spec to include the Vault Agent init container and the sidecar container. We then check that back into Kubernetes; Kubernetes validates it and moves it along the line to where it eventually gets deployed.

So here's an example of annotations that configure the Vault Agent Injector. And this is just a small set of the configurations available:


spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-status: "update"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end }}
        vault.hashicorp.com/role: "db-app"

First we're saying agent-inject equals true. This means that we want to inject Vault Agent containers. Next, we want to request a secret at database/creds/db-app. At this location, I might have something like a PostgreSQL username and password to connect to a PostgreSQL database. And you can see I have this agent-inject-secret annotation for this specific secret, db-creds.

So this template I've created here will create a Postgres connection string, and Vault will automatically fill in the blanks for us. It will generate a username and password by connecting to the Postgres database and creating an account in Postgres for you. And then finally, there's the name of the role within Vault that we want to use, which has the policies attached to it that grant access to the secret.

Demo: Static Secrets with the Vault Agent Injector

As I mentioned earlier, I'm going to be giving three demos. We'll be injecting static secrets, we'll be injecting dynamic secrets, and we'll use the Vault Agent Injector for things other than secrets, such as encrypting and decrypting values using the transit secret engine. Vault is already running and pre-configured for this demo. You can see here that I have two Pods. I have a vault-0 Pod, which is the Vault server, and I have a Vault Agent Injector Pod. This is the mutating webhook that will be mutating Pod specs to include Vault Agent containers.

Next, I configured Vault with the policy, the secret engines, and the auth method that my apps and demos are going to use. So I've created a policy called app, and we can look at that policy very quickly. This policy grants access to all the secrets that my demos are going to be using.

So here we have the static KV secret at secret/hashiconf, the database credentials we'll be using in the dynamic demo, and then access to the transit paths for encrypting and decrypting data in the third demo.
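The policy itself isn't shown on screen here, but based on the paths just described, a sketch of the app policy could look something like this (the capability choices and KV version are assumptions):

vault policy write app - <<EOF
# Static KV secret for the first demo (with KV v2 this path would be secret/data/hashiconf)
path "secret/hashiconf" {
  capabilities = ["read"]
}

# Dynamic Postgres credentials for the second demo
path "database/creds/db-app" {
  capabilities = ["read"]
}

# Encryption as a service for the third demo
path "transit/encrypt/app" {
  capabilities = ["update"]
}
path "transit/decrypt/app" {
  capabilities = ["update"]
}
EOF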

Next, I enabled the Kubernetes auth method. This allows Vault to authenticate Kubernetes workloads using their service accounts. So I configured Vault to talk to the Kubernetes cluster: I gave it a token reviewer JSON web token so that Vault can validate clients who authenticate with Kubernetes service accounts, I told it where the Kubernetes API lives, and finally I gave it the Kubernetes CA certificate so that Vault can verify the Kubernetes API server's TLS certificates.
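The commands behind that configuration look roughly like the following; the token and CA certificate paths are the standard in-cluster service account locations and are assumptions here:

vault auth enable kubernetes

# Teach Vault how to validate Kubernetes service account tokens.
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://kubernetes.default.svc:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt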

Next, I created a Kubernetes auth role in Vault that maps a Kubernetes service account to a Vault role with a policy. So I have a Kubernetes service account named app that lives in the app namespace, and I'm going to attach the app policy to this role. This is what my demo will be using when it logs into Vault.
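That mapping is a single role write; the TTL here is an arbitrary illustrative value:

# Map the "app" service account in the "app" namespace to the "app" policy.
vault write auth/kubernetes/role/app \
    bound_service_account_names=app \
    bound_service_account_namespaces=app \
    policies=app \
    ttl=1h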

For the first demo, we'll be doing static secrets. I enabled the KV secret engine and I put a simple key-value pair at secret/hashiconf: hashiconf equals rocks. So let's inject this secret. I have a web app here that will basically read this key-value pair from some place on disk and then display it.
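Enabling the engine and writing that value is just two commands (a KV v1 mount at secret/ is assumed, to match the paths used in this demo):

vault secrets enable -path=secret kv
vault kv put secret/hashiconf hashiconf=rocks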

It's a very simple application. It doesn't talk to Vault at all; all it does is read a file at a certain path, and we specify that path here with an environment variable. As you can see, though, this deployment only has one container defined in it, just my app container. We'll be patching this deployment definition with the necessary annotations to add the Vault Agent containers. As I mentioned earlier, we'll be using the app service account that's been configured here.

Let's start this Pod. We can see down below that my app's Pod is running, but it says 1 out of 1 ready. Currently we only have one container defined, our app container; there are no agents running in this Pod. If I look at these patch annotations, we're going to add the following annotations to this deployment. Now, these annotations could be added in the definition itself, so that when we create the Pod it automatically injects these, or we can patch them like I'm doing here on the fly.

To enable the injection, we're going to set agent-inject equal to true. We're going to say we want the secret at secret/hashiconf, and we're going to identify it using the name kv-secret. We're going to attach a custom template that tells the Vault Agent how to render the secret in a specific way. So, using that same kv-secret name, we attach the following custom template: at the secret/hashiconf path, whatever data you find there, put it into a JSON object and render that to disk.

We'll be using the Vault role app, which is what was configured earlier. And then, because Vault is protected with TLS, we're going to give it the name of a Kubernetes secret where the CA certificate can be found, so that our Vault Agent can verify Vault's certificates. So let's inject this.
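Pulled together, the patch being described might look roughly like this; the deployment name app, the TLS secret name, and the CA path are assumptions, and the template follows the "render the data as JSON" behavior just described:

kubectl patch deployment app --patch "$(cat <<'EOF'
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        # Render secret/hashiconf to /vault/secrets/kv-secret
        vault.hashicorp.com/agent-inject-secret-kv-secret: "secret/hashiconf"
        # With KV v2 the path would be secret/data/hashiconf and the data would be under .Data.data
        vault.hashicorp.com/agent-inject-template-kv-secret: |
          {{- with secret "secret/hashiconf" -}}
          {{ .Data | toJSON }}
          {{- end }}
        vault.hashicorp.com/role: "app"
        # Assumed name of the Kubernetes secret holding Vault's CA, and the CA file path inside it
        vault.hashicorp.com/tls-secret: "vault-tls"
        vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
EOF
)"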

I can see down below here that there are now two Pods. Our old Pod was deleted and our new Pod has come online. It says 2 out of 2 this time, so there's more than one container. Let's take a look. Some things have changed since our initial deployment. We now have an init container, a vault-agent-init container to be exact. This pre-populates the shared memory volume, which is mounted to all the containers in this Pod, with the secrets that were requested. This is helpful when our application needs secrets at startup and can't wait for the long-running agent container to populate the disk.

You can see here that a volume was mounted at /vault/secrets. This is the memory volume where our secrets will be rendered to. And then there's the Vault TLS volume; this is where our CA certificate can be found, so the Vault Agent can use it to verify Vault's server certificates.

We see our app container here. Nothing has really changed except we've mounted the shared memory volume. This is where the secrets will be found, and we configured this beforehand using the environment variable saying we expect our app secret to be found at /vault/secrets/kv-secret. And then finally we have our Vault Agent container. This is a long-running Vault Agent that keeps our secrets updated and renewed. It's very similar to the init container, except it doesn't exit after it renders the secrets; it keeps running.

So if we port forward to this container now and navigate to it, I can see here that my web app has found the secret on disk at /vault/secrets/kv-secret, and it found the value hashiconf equals rocks. And if we exec into this container and go to that location, I can see that we have a file here called kv-secret, and in this file there is a "hashiconf": "rocks" JSON object. So static secrets are nice, but they don't really change over time. Vault doesn't generate them; they've been configured by an operator so that you can consume them. They're really meant for things that don't change very often.

Demo: Dynamic Secrets with the Vault Agent Injector

Now, the power of Vault comes from dynamic secrets. Vault can generate credentials for you on the fly if given access to things like databases. So I've already set up Vault with a database secret engine, specifically Postgres. Here you can see we enabled the database secret engine and configured it to talk to a Postgres database. We have already set up a role within Postgres that allows Vault to manage users in Postgres, and we tell Vault how it can connect to the Postgres database; in this case it's a service called postgres in the postgres namespace.
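The setup being described corresponds roughly to the following; the connection name app-db, the database name appdb, and Vault's own Postgres credentials are placeholders:

vault secrets enable database

# Point Vault at the postgres service in the postgres namespace and allow the db-app role to use it.
vault write database/config/app-db \
    plugin_name=postgresql-database-plugin \
    allowed_roles="db-app" \
    connection_url="postgresql://{{username}}:{{password}}@postgres.postgres.svc:5432/appdb?sslmode=disable" \
    username="vault" \
    password="<vault-postgres-password>"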

Next we set up a role for the database engine that we give our service account access to. We're going to call this role db-app, and this is the SQL that Vault will run every time we fetch these credentials. It's going to create a role with a randomly generated name and a randomly generated password, and that's going to expire at some point depending on the TTL settings of this role.

We're going to allow it to connect to our database. We're going to allow it to access an application schema, and we're going to give it permissions on that schema, such as the ability to create, read, write, and update tables within it. And once again, our policy, as I showed earlier, allows us to read this secret. So we expect that our application can read database/creds/db-app, and this is where it'll get its login string from.
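A sketch of that role definition, with the creation SQL, schema name, and short TTLs as illustrative assumptions, could look like this:

# Short TTLs so the credential rotation is visible during the demo.
vault write database/roles/db-app \
    db_name=app-db \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
        GRANT CONNECT ON DATABASE appdb TO \"{{name}}\";
        GRANT USAGE ON SCHEMA app TO \"{{name}}\";
        GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO \"{{name}}\";" \
    default_ttl=1m \
    max_ttl=5m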

This app looks very similar to the one before, but it's a little bit different. This time, the web application is going to connect to a Postgres database, read some data from tables, and display it. Same as before, we're using a service account named app. This is our app container for the database secret engine demo. We tell it which schema we're going to be getting our information from, so the schema where our tables live, and then where on disk that secret is going to be found: /vault/secrets/db-creds.

Now, there's one important thing here. We're going to tell this container to run as the user 100 and the group 1000. This is done so that every single time these credentials change or expire, the agent can automatically go out and grab new ones and then send a signal to our application to say, "Hey, reload your configuration file. Things have changed." If your application can handle that kind of logic, it allows the secrets to change over time and the Vault Agent to tell your app, "Hey, your secrets have changed, please reload."

The reason we're running this as the user 100 and the group 1000 is that the agent and my application container are going to be running as the same user, and they're going to be sharing the same process namespace. This allows the Vault Agent to actually send signals to my application container.

So same as before, we're going to be adding these annotations on the fly. We're going to say inject. We want to run the agent as the same user as our application container, in this case 100 and 1000. We're going to say we want the secret from database/creds/db-app, and we want to use this template to render these secrets into a Postgres connection string so that my app can just use it and automatically connect. So whatever username and password you find at this location, render them into this Postgres connection string so that I can connect to the Postgres database running in Kubernetes.

Next, every single time the secret expires and we get a new one, run the following command, which sends a HUP signal to my application, whose process is named app. We're going to be using the role app once again, and then the TLS certificates so that we can verify Vault. So let's run this and patch it with these annotations.
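On top of the agent-inject, secret, template, role, and TLS annotations already shown, the patch described here adds roughly the following; the pkill -HUP command is an assumption matching the "signal the process named app" behavior:

kubectl patch deployment app --patch "$(cat <<'EOF'
spec:
  template:
    metadata:
      annotations:
        # Run the injected agent with the same UID/GID as the app container
        # so it can signal the app in the shared process namespace.
        vault.hashicorp.com/agent-run-as-user: "100"
        vault.hashicorp.com/agent-run-as-group: "1000"
        # Command the agent runs whenever db-creds is re-rendered.
        vault.hashicorp.com/agent-inject-command-db-creds: "pkill -HUP app"
EOF
)"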

You can see down below we only have one container, so this is our app container without any injected agents. Now I can see it's 2 out of 2. If we inspect this again, we see once again that we have a Vault Agent init container and a Vault Agent sidecar container, with a shared memory volume between all of the containers.

If I port forward to this container again, the app looks a little bit different. This time I've connected to a Postgres database. This is my randomly generated username. This is a Postgres 12.3 server. And it just read this data from a table found in that database. So in this case it's just a table of different kinds of coffee.

Now, if I refresh this app, we can see that the username changed. This is because the credentials that Vault generated had expired, depending on the settings we set up earlier when we configured the role for the database engine. Every single time these change, our app gets a signal and automatically updates its configuration.

If I keep refreshing, it stays the same. And at some point these credentials are going to expire, our app will get a signal, and it will automatically update its configuration. So my app didn't crash; it just updated on the fly, depending on how often this username and password change.

If I exec into this container and go once again to /vault/secrets, I can see I have a file here called db-creds, and it has a Postgres connection string with a username and a password and how to connect to Postgres. And if I look at the process tree here, I can see these are all sharing the same process namespace, so I can see my app process and I can see the Vault Agent running.

Demo: Using Transit Encryption in Kubernetes

We showed you static secrets and we showed dynamic secrets, but Vault does other things, such as encryption as a service using the transit secret engine. Once again, this has already been set up. You can see here at the very bottom of this configuration script that I've enabled the transit secret engine and I've written a key. I've told Vault to generate a key at transit/keys/app. This is the key that Vault will use to encrypt and decrypt data. And I've already configured the policy to allow access to the following path: transit/encrypt/app.
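That setup is a couple of commands:

vault secrets enable transit

# Create the named key the app policy references; Vault manages the key material.
vault write -f transit/keys/app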

The transit/encrypt/app path is what my app will use to encrypt values it sends along to Vault, and transit/decrypt/app is the path my app will use to decrypt values that were already encrypted. My app doesn't have to worry about managing any keys; Vault does that automatically for me.

There are just a few different annotations this time, but first we'll look at the Pod. We have this deployment, and it looks the same as before. We have one container; this time it's using a different image for the transit example. It's also going to connect to a Postgres database, read usernames and passwords from some table in Postgres, encrypt the passwords, and then decrypt them using transit.

So, very similar to before: our service account is named app, and we're sharing the process namespace. The difference, though, is in the annotations. This time we've added two different annotations. Since our application is going to actually be using Vault's transit service, it's semi-Vault-aware. It knows how to talk to Vault, but it doesn't need to worry about things like logging in or renewing secrets or tokens.

All it really needs to do is say, "I'm going to try to decrypt this value by sending it to some Vault URL." And the Vault Agent, using two different annotations here, has enabled a listener so that containers within this Pod can use the agent container to proxy requests to Vault without having to manage tokens. The agent is already configured to log into Vault and use its token, which already has access to the transit paths. So my app, although it's semi-Vault-aware, doesn't need to worry about the specifics of Vault. All it needs to know is that it can ask the agent container running on localhost to encrypt and decrypt data; it doesn't have to do anything else other than send values to the agent.

This is very nice because renewing tokens and fetching secrets can result in a lot of logic for my application. All I really have to do is just send data to this agent without worrying about anything else.

So, same as before, we're connecting to a Postgres database, but this time we're going to enable caching. With caching enabled, Vault Agent will automatically renew and fetch secrets for you. So if my application were trying to fetch a secret really often, the agent wouldn't actually send those requests to Vault; it would just return whatever is cached. And when that secret changes, Vault Agent will go out, get it, and cache it, so those requests automatically get served from the cache without the user even knowing. This takes a lot of pressure off the Vault server.

Then we're going to say that requests coming from within this Pod should use the auto-auth token that my Vault Agent has already obtained. So my app container doesn't actually need to do a login; it doesn't need to send along a JSON web token to authenticate to Vault. The agent has already configured itself to do that. Then, same as before, we're running as the same user as our container, we're rendering a Postgres connection string, we're sending a signal when that secret changes, we're using the app role, and we're getting the certificates from this secret so that we can verify the Vault server certificate.
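The two agent-specific annotations being described are the injector's caching settings; added to the annotations already discussed, they look like this:

kubectl patch deployment app --patch "$(cat <<'EOF'
spec:
  template:
    metadata:
      annotations:
        # Run a local Vault Agent listener that caches and proxies requests to Vault.
        vault.hashicorp.com/agent-cache-enable: "true"
        # Attach the agent's auto-auth token to proxied requests so the app never logs in itself.
        vault.hashicorp.com/agent-cache-use-auto-auth-token: "true"
EOF
)"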

So let's run this. We'll just patch it with these annotations. Same as before, we can see we have 2 out of 2 containers running; the whole Pod has been destroyed and re-created. Now, if we port forward to this, I can see here that I'm connecting to a Postgres 12.3 server using this username, and I've read values from some table that has usernames and passwords.

For the passwords, I encrypted them using Vault's transit/encrypt/app path, and these are the encrypted values. These could now be stored in a database or some other storage solution. And then, just to demonstrate this, my application took those encrypted values and sent them back to Vault using the decrypt path to get back the actual plaintext values. So my app didn't have to think about which key is used to decrypt these; Vault does all that for me.
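For illustration, here's what encrypting and decrypting through the local agent could look like from inside the app container; the listener address and port are assumptions (they depend on how the injected agent's listener is configured), and transit always works on Base64-encoded plaintext:

# Talk to the injected agent's local listener instead of the Vault server directly.
export VAULT_ADDR=http://127.0.0.1:8200   # port is an assumption; match your agent listener config

# Encrypt: send Base64-encoded plaintext, get back a "vault:v1:..." ciphertext.
vault write transit/encrypt/app plaintext=$(echo -n 'hunter2' | base64)

# Decrypt: send the ciphertext returned above, then Base64-decode the plaintext field.
vault write -field=plaintext transit/decrypt/app ciphertext="vault:v1:<ciphertext-from-above>" | base64 --decode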

Wrapping Up

So that's my talk. If you want to know more, such as installing Vault on Kubernetes and managing Vault, we have a project, vault-helm, which can be found at hashicorp/vault-helm. And if you want to know more about the agent injector, there's the hashicorp/vault-k8s project. Thank you.

Thank you very much Jason. That was absolutely...
