Secure Kubernetes Deployments with Vault and Banzai Cloud
The following is a guest blog post from Nandor Kracser, Senior Software Engineer at Banzai Cloud. Banzai Cloud is a young startup whose mission is to simplify the adoption of cloud-native technologies in the enterprise, using Kubernetes.
At Banzai Cloud, we are building an open source, next-generation platform as a service, Pipeline, built on Kubernetes. With Pipeline we provision large, multi-tenant Kubernetes clusters on all major cloud providers and deploy different workloads to these clusters. We needed an industry-standards-based way for our users to publish and interact with protected endpoints, and at the same time to provide dynamic secrets management for all the different applications we support, all with native Kubernetes support. After several proofs of concept, we chose HashiCorp Vault. In this post we'd like to highlight how we use Vault and provide technical insight into the available options.
Pipeline API and Vault
The primary interaction with Pipeline is through a RESTful API (the CLI and UI use the API), and we decided to secure it using OAuth2 with JWT bearer tokens. The main benefit of OAuth2 is that we don't have to store any user credentials, and our users can use their existing accounts with their preferred provider. JWT tokens are also self-contained and allow stateless authentication, with access to protected resources based on scopes. Nevertheless, these tokens can't live forever: they need to be revocable and whitelisted, to work together with Kubernetes, and to be distributable to different cloud providers in a multi-tenant way. For our use case, Vault was a very nice fit.
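To make the revocation requirement concrete, here is a minimal sketch of issuing such a token. This is an illustration only: the library (golang-jwt), the signing key, the scope value, and the claim layout are our assumptions, not necessarily what Pipeline uses.

```go
// Minimal sketch: issuing a scoped JWT whose unique ID ("jti") can be
// whitelisted in Vault, making an otherwise stateless token revocable.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
	"github.com/google/uuid"
)

var signingKey = []byte("replace-with-a-real-secret") // assumption: HMAC signing

func issueToken(userID string) (string, error) {
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub":   userID,                                // the authenticated user
		"scope": "api:invoke",                          // access is scope-based
		"jti":   uuid.New().String(),                   // ID to whitelist in Vault
		"exp":   time.Now().Add(24 * time.Hour).Unix(), // tokens must expire
	})
	return token.SignedString(signingKey)
}

func main() {
	signed, err := issueToken("user-42")
	if err != nil {
		panic(err)
	}
	fmt.Println(signed)
}
```

Because the `jti` is recorded at issuance, a token can later be revoked by deleting its entry from the whitelist, even though the JWT format itself has no revocation mechanism.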
First of all, we use Vault's Kubernetes Auth Method to authenticate with Vault using a Kubernetes Service Account Token. Pipeline uses Vault to lease the ServiceAccount JWT tokens, enabling all other applications running in the same Kubernetes cluster to call Vault and to use tightly scoped tokens with various TTLs. The revocable, whitelisted tokens are stored in Vault's Key/Value Secret Engine. Every time a user interacts with the Pipeline API, we check these tokens through Vault's built-in cache, so performance is not affected.
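A minimal sketch of that flow with the official Vault Go client looks roughly like the following. The auth mount (`kubernetes`), the role name (`pipeline`), and the KV path layout are assumptions for illustration, not Pipeline's actual configuration.

```go
// Sketch: log in to Vault with the pod's Service Account token, then
// check whether a user's JWT ID is still whitelisted in the KV engine.
package main

import (
	"fmt"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig()) // honors VAULT_ADDR
	if err != nil {
		panic(err)
	}

	// The Service Account token is mounted into every pod by Kubernetes.
	jwt, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}
	login, err := client.Logical().Write("auth/kubernetes/login", map[string]interface{}{
		"role": "pipeline",
		"jwt":  string(jwt),
	})
	if err != nil {
		panic(err)
	}
	client.SetToken(login.Auth.ClientToken) // a tightly scoped token with a TTL

	// Whitelist lookup in a KV v1 engine (path layout is hypothetical).
	secret, err := client.Logical().Read("secret/accesstokens/user-42/some-jti")
	if err != nil {
		panic(err)
	}
	fmt.Println("token is whitelisted:", secret != nil)
}
```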
In this particular scenario we use the Key/Value Secret Engine to store the OAuth2 JWT bearer tokens, but we have also integrated and are using several other pluggable Vault engines: among the Secret Engines, the Databases engine to generate dynamic credentials and the SSH engine for dynamic SSH access to hosts; among the Auth Methods, Kubernetes and GitHub.
For an overview of how we use Vault as a central component of our auth flow, see the diagram below.
For further technical details of securing an API deployed to Kubernetes with Vault, read our post Authentication and authorization of Pipeline users with OAuth2 and Vault.
Dynamic credentials
Once the inbound API calls are secured with JWT bearer tokens, let's see how Pipeline deploys applications to Kubernetes with credentials. These applications, and the cluster itself, are dynamic: scaled, removed, or re-scheduled based on different SLAs. One thing clustered applications usually have in common is that they interact with other applications to exchange (sensitive) data. For the sake of simplicity, let's look at this from the perspective of applications connecting to databases. Connecting to a database almost always requires passwords or certificates, so the user has to pass these down to the application code through configuration. First of all, handling credentials manually and storing them in configuration files and the like is less secure. Second, we try to educate and push our end users towards more secure solutions where they never have to pass these credentials around at all. All our deployments are orchestrated through Helm charts, and unfortunately we have seen many times that credentials were generated or passed into charts during deployment.
Since we already have Vault as a core part of Pipeline, and Vault supports dynamic secrets, we decided to make dynamic secrets an out-of-the-box solution for all our supported deployments. The advantages of dynamic secrets are already described in Why We Need Dynamic Secrets, a great blog post by Armon Dadgar. In a nutshell: to harden security, each application gets a dedicated credential for the requested service; this credential belongs only to the requesting application and has a fixed expiry time. Because the credential is dedicated, it is possible to track down which application accessed the service and when. Credentials are also easy to revoke, because they are centrally managed with Vault.
Since Pipeline is running on Kubernetes, we can apply Kubernetes Service Account based authentication to get the Vault tokens first, which we can later exchange for a MySQL credential (username/password) based on our configured Vault role. Please see this diagram for further details about the sequence of events:
As you can see, with this solution Pipeline is able to connect to MySQL simply because it runs with the configured Kubernetes Service Account, without a single username or password having to be typed during the configuration of the application.
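The credential-exchange step could look roughly like this sketch. The mount path (`database`), role name (`my-role`), and connection details are assumptions; a Vault client already authenticated via the Kubernetes auth method, as shown earlier, is assumed.

```go
// Sketch: exchange a Vault token for short-lived MySQL credentials via
// the database secrets engine, then open a connection with them.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
	vault "github.com/hashicorp/vault/api"
)

func connectWithDynamicCreds(client *vault.Client) (*sql.DB, error) {
	// Each read leases a brand-new username/password pair with a TTL;
	// Vault revokes the pair automatically when the lease expires.
	secret, err := client.Logical().Read("database/creds/my-role")
	if err != nil {
		return nil, err
	}
	user := secret.Data["username"].(string)
	pass := secret.Data["password"].(string)

	dsn := fmt.Sprintf("%s:%s@tcp(mysql:3306)/mydb", user, pass)
	return sql.Open("mysql", dsn)
}

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		panic(err)
	}
	// ... authenticate via the Kubernetes auth method, as shown earlier ...
	db, err := connectWithDynamicCreds(client)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```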
The code implementing the dynamic secret allocation for database connections and the Vault configuration described above can be found in our open source project Bank-Vaults. For further technical details on using dynamic secrets for applications deployed to Kubernetes, check out our other post.
Storing cloud provider credentials
Pipeline is built on Kubernetes and is cloud-provider agnostic. We offer support for AWS, GKE, and AKS (with a few more coming to GA soon). In order to push the K8s clusters and the applications to the cloud, we need to perform quite a lot of cloud provider interactions through their APIs, and we need certain cloud provider credentials and roles to do so. This is a very delicate matter: end users have to trust us to store these credentials in a very safe way, while we give them full control over when and how those credentials are revoked. Storing these credentials or roles in a database did not give us enough confidence, so we decided to use a system specialized in storing secrets; again, enter Vault.
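Stored in the KV engine, such credentials look roughly like this sketch. The path layout and field names are purely hypothetical (KV v1 style); Pipeline's actual schema may differ.

```go
// Sketch: keep a user's cloud provider credentials in Vault's KV engine
// rather than in a database, so access can be scoped and revoked per
// user via Vault policies.
package main

import (
	"fmt"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		panic(err)
	}
	// Token obtained via an auth method, as shown earlier.

	_, err = client.Logical().Write("secret/orgs/my-org/aws", map[string]interface{}{
		"AWS_ACCESS_KEY_ID":     "AKIA...",
		"AWS_SECRET_ACCESS_KEY": "...",
	})
	if err != nil {
		panic(err)
	}

	secret, err := client.Logical().Read("secret/orgs/my-org/aws")
	if err != nil {
		panic(err)
	}
	fmt.Println(secret.Data["AWS_ACCESS_KEY_ID"])
}
```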
Dynamic SSH credentials
Once we push these applications to different providers, we give full enterprise support to our end users. Again, note that these clusters are fully dynamic; many of them are hybrid, and the VMs underneath are constantly changing. The majority of the clusters we provision are based on spot or preemptible instances, so the churn of VMs is high. We have a system called Hollowtrees for securely running spot-instance-based Kubernetes clusters, in which we closely follow the state of the spot instance markets. We react to spot instance terminations, and we sometimes replace instances with others that have better price and stability characteristics. Static SSH keys for accessing these clusters (especially since they can't be dynamically revoked) are not an option for our customers or for us. At the same time, we still need to access the VMs underneath Kubernetes for debugging purposes. Since many developers access them, and the VMs come and go every minute, we have to distribute access in a very dynamic manner. For this purpose we decided to use Vault's SSH Secret backend, which does dynamic Client Key Signing for accessing our remote VMs.
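A developer's side of that signing flow could look roughly like this sketch. The mount (`ssh`) and role (`dev-role`) are assumptions for illustration.

```go
// Sketch: ask Vault's SSH secrets engine to sign a developer's public
// key. The resulting certificate is short-lived, so access to the VMs
// expires on its own with no key rotation or manual revocation needed.
package main

import (
	"fmt"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		panic(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN"))

	pubKey, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa.pub"))
	if err != nil {
		panic(err)
	}

	secret, err := client.Logical().Write("ssh/sign/dev-role", map[string]interface{}{
		"public_key": string(pubKey),
	})
	if err != nil {
		panic(err)
	}
	// Save the output as ~/.ssh/id_rsa-cert.pub and ssh as usual; the
	// VMs trust the Vault CA, so no static keys live on the hosts.
	fmt.Println(secret.Data["signed_key"])
}
```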
As you can see, Vault has several backends and engines already available, and with simple configuration and some code, most of the security features required by enterprises can be implemented quickly. For the sake of simplicity we end this post here, but we will keep posting about how we seal and unseal Vault, along with other advanced scenarios, in forthcoming posts. Meanwhile, please check out the open source code on our GitHub and make sure to go through the great Vault tutorials and examples.