Presentation

Zero Trust Security with HashiCorp Vault, Consul, and Boundary

HashiCorp has a clear vision for the future of cloud security, and it is founded on zero trust security and identity-driven controls.

There's a lot of buzz around the idea of Zero Trust networking and security, but how do you actually implement this? What products would you use in that toolchain and pipeline?

In this presentation, HashiCorp shares its vision for identity-driven security and networking controls that can help organizations move to the stronger cybersecurity posture of "Zero Trust." This shift is largely driven by the move to the cloud, which ends the era of total reliance on "castle and moat" security architectures, where we assume nothing gets past the perimeter and internal firewalls.

We present three free and open source products — HashiCorp Vault, HashiCorp Consul, and HashiCorp Boundary — as cornerstones on which you can build a modern Zero Trust security architecture for the multi-cloud and hybrid cloud era.

For more information after the video, visit HashiCorp's site on Zero Trust Security.

Transcript

Mitchell Hashimoto: When HashiCorp talks about its mission, we talk about making it more efficient to take applications into the cloud. When we talk about that, we talk about the four pillars of provisioning, security, networking, and runtime.

When we look at these four pillars, we have our software alongside each of them. These are the important things for organizations to look at as they navigate the changes between on-prem and cloud environments, or, more generically, between traditional environments that are more static and modern environments that are more dynamic.

Static to Dynamic

Today, we're going to focus on the security layer of this diagram. Thinking through that transition from static to dynamic for security, these diagrams give you an idea of what we're going to be talking about here.

On the left, we have the more traditional static approach. You can notice that there's a four-wall perimeter approach to this. We'll talk more about that in a second. On the right, you see a more modern approach, which lives across multiple platforms — potentially multiple cloud platforms — and with dozens of applications that need to communicate to each other securely.

Migration to the Cloud

So, diving in with more detail of this transition from static to dynamic. On the static and the traditional side, we often see security practices focused on perimeter-based or networking-based security.

In this model, you have the idea that on your network you tend to have four physical walls — potentially your datacenter — and all the stuff inside is trusted, while the stuff outside is generally untrusted. In between, you have something like a firewall protecting that. Within your environment, we usually use IP-based security to prevent or allow access between different internal services.

In a more modern approach, this doesn't work super well. In modern environments, you tend to not have a physical structure — such as a datacenter that you own — to provide that security. It's much more of a software-based security model. You deploy things into software-based networks whose physical resources you don't control.

This requires a slightly different way of thinking. We have to think more about dynamic access because creating new services, new servers — bringing online new components of your infrastructure — is just an API call away. We have to get used to that dynamic access part of security that you tend not to have as much in a traditional static world.

The other thing is moving away from perimeter and IP-based security to more of an identity-based security approach. You have to do this because the perimeter is mostly gone. And as you're talking between multiple platforms, the network connectivity and the IP control might not be there. Identity-based security makes the most sense.

As applications have evolved, we're moving away from one application per VM — or one application per IP — type of approaches and more into a multiplex scheduled workload environment with tools like Nomad and Kubernetes. That even pushes it further towards this identity-based paradigm.

Multi-Cloud Security in a “Zero Trust” World

Often, when we think about these modern security workloads, the term zero trust comes up. Zero trust can be a little confusing because it's often not well defined. We believe in the zero trust model as well, but I want to start by defining it and then talking about how we're addressing each part of that definition very specifically.

In a zero trust world, the idea is to move towards identity-based controls as the source of all security. When we think about the things that need security and the identity-based controls attached to them, we come up with these four primary categories: machine authentication and authorization, machine-to-machine access, human-to-machine access, and human authentication and authorization.

These are the four broad categories that address all the needs of our security and identity-based approach. The goal of this is to be able to trust nothing. With a zero trust approach, there's zero trust — we want to trust nothing, but to do that we have to authenticate and authorize everything using these known identities.

So, given that, let's go through each one of these categories and more concretely define what we're talking about here.

Machine Authentication and Authorization with Vault

Services that are nonhuman actors within your infrastructure need to have the ability to authenticate themselves and authorize themselves to access different parts of the infrastructure. This might be database credentials, data itself, or each other.

We started with this problem for machines, and we addressed it with Vault. Vault is a piece of software that came out in 2015, and with it we tackled this machine authentication and authorization problem directly.

The Vault Approach

Vault works like this — on one side, you have clients that want to access some secret material. That could be a static set of secrets, database credentials — it could be other sorts of credentials, etc.

To do that, they first need to prove their identity. They do this using a dynamic approach. They use the identity that they have — and this identity may change depending on the environment they're in.

If we're talking about an EC2 instance on AWS, we're likely using our AWS credentials to prove our identity. But if we're talking about an on-premise service, we're probably using something like Active Directory or some other on-premise solution to prove our identity. Vault allows you to mix these depending on who is talking.

First, we authenticate using that identity with Vault. Then, based on a policy within Vault, we're able to authorize whether that identity can access the secret material that they want to access. Assuming the policy does pass, the client gets access to that secret material.
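To make that flow concrete, here's a minimal sketch using the Vault CLI. The auth methods, role names, policy, and secret paths below are illustrative placeholders, not a prescribed setup.

```shell
# Operator side (one-time setup): trust the platforms that clients live on.
vault auth enable aws    # cloud workloads prove identity with AWS credentials
vault auth enable ldap   # on-prem clients prove identity via Active Directory/LDAP

# Map an AWS IAM principal to a Vault role and policy (the ARN is a placeholder).
vault write auth/aws/role/web-app auth_type=iam \
    bound_iam_principal_arn="arn:aws:iam::123456789012:role/web-app" \
    policies=web-app

# The policy authorizes what an authenticated identity may read.
vault policy write web-app - <<EOF
path "secret/data/web-app/*" {
  capabilities = ["read"]
}
EOF

# Client side: an EC2 instance logs in with its AWS identity and, if the
# policy check passes, reads the secret material it's allowed to see.
vault login -method=aws role=web-app
vault kv get secret/web-app/config
```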

Using this highly dynamic approach, Vault has been able, in a very flexible way, to support clients on any platform — traditional and modern — and bring it all together to access any sort of secret material, whether it's plain data like files and key-value pairs, or something more dynamic, like generating SQL credentials on the fly.
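For the more dynamic case mentioned here, generating SQL credentials on the fly, a rough sketch of Vault's database secrets engine looks like this; the connection string, role, and TTLs are placeholders.

```shell
# Enable the database secrets engine and tell Vault how to reach the database.
vault secrets enable database
vault write database/config/app-db \
    plugin_name=postgresql-database-plugin \
    allowed_roles=readonly \
    connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/app" \
    username="vault-admin" password="example-password"

# Define how credentials are created and how long they live.
vault write database/roles/readonly \
    db_name=app-db \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h max_ttl=24h

# Each read returns a unique, short-lived username and password.
vault read database/creds/readonly
```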

Vault has been popular. It's grown tremendously in five years. In the past year, there have been almost 16 million downloads of Vault. Of those 16 million downloads, over 600,000 were of the Vault and Kubernetes integration alone. Vault now serves trillions of secrets every year for our users and customers.

ABN AMRO: A Vault Case Study

Over 70% of the top 20 US banks are now using the commercial version of Vault. To showcase a bank that is using Vault, I'd like to introduce Sarah Polan, who's going to talk more about how ABN AMRO uses Vault — Sarah.

Sarah Polan: Thank you for the introduction, Mitchell. At ABN AMRO, we're currently working to enable security through automation. So, we received a directive from our CSO that indicated we needed to direct and automate security decisions.

I think that's an excellent decision. However, how do you logistically manage that for 26 different business applications — all of which are leveraging different technologies, and with regulatory requirements across 19 different countries, for 18,000 employees and contractors? It's not a small feat.

Enabling Security Through Automation

We decided to focus on our secrets hygiene. This means keeping our secrets in a safe spot and teaching teams how to leverage that lifecycle. We decided we wanted to have a centralized solution — something that could handle multi-cloud and also on-prem solutions. Secure multi-tenancy was non-negotiable — we needed to be sure that whatever we were using was limited to that singular space.

Last but not least, we thought support was incredibly important. Not only vendor support — knowing that they would be there if we had an issue or could help us with the underlying architecture — but also the community support around it. Something that our developers themselves could go and leverage should they have a question about the best way to integrate Vault.

Why We Chose Vault

When we initially chose Vault, we thought we were going to be leveraging a secure, stable method for storing secrets — anything that a normal vault would do. We made this more defined by indicating that we wanted this to be for authentication and authorization use — for identity and access management.

At the time, we started looking at the different secrets in play. We were seeing API keys, certificates — specifically static certificates — usernames, and passwords. We were noticing a lot of database credentials that needed to leverage privileged users for databases.

As we discovered a little bit more about Vault, we wanted to allow teams to use the dynamic secrets. We thought that would help our security positioning the best, and we thought it would be best for teams to be able to leverage that. We started looking at that, particularly for ephemeral workloads.

Lastly, we needed to empower teams to automate secrets to their relevant use case. We had this great plan where we were going to create a campaign around secrets management and teach teams how to integrate Vault for their use case.

Vault Exceeded Our Expectations

Well, as with any use case, things evolve. Vault has actually provided us with a fully onboarded solution. For teams, the onboarding is completely automated. They don't have to know about Vault. There's also no human interaction — and for us, that's key. That means no human eyes — secrets are actually secrets.

We've also shifted our narrative on TLS certificates. Instead of having a large CA infrastructure, we started looking at how we can shift to dynamic certificates that are shorter-lived and will also increase our security posture.

We've opened the dialog for Encryption as a Service. As many of you know, encryption is incredibly expensive. It's difficult to implement within applications. So the fact that we might be able to do this with Vault — well — there's a huge business case around that.

But lastly — and most importantly to me — is that it's permitted the CISO team to become enablers. Instead of standing in the way of our development teams, we are now able to leverage Vault to maintain their velocity and, in some cases, maybe even increase it.

Back to you, Mitchell.

Mitchell Hashimoto: Thank you very much, Sarah.

Vault Customer Feedback

Over the past five years, there have been millions of downloads and happy users of Vault. And over those five years, we've continuously listened to feedback to improve Vault. Today, the two most common pieces of feedback we hear are:

  1. Finding the skills necessary to use Vault is sometimes challenging.
  2. Getting up and running with Vault can take a little longer than people would like.

We wanted to address these two pieces of feedback.

To address the challenge of getting started with Vault, there are multiple delivery options. The first is to continue using Vault the way it has worked since 0.1 — managing it yourself. You download Vault, run it on your own infrastructure, and you can run it wherever you want. That's what has existed up to today.

The HashiCorp Cloud Platform Vision

The second is a cloud service: a managed Vault. Earlier this summer, we talked about how a cloud-based Vault offering was on the way — and that cloud-based Vault offering would be based on something we call the HashiCorp Cloud Platform.

The HashiCorp Cloud Platform is based on three main goals. We want to provide push-button deployment of our software. Second, we want all the infrastructure running the software to be fully managed. You shouldn't have to worry about OS patching, spinning up infrastructure, etc. Third, we want to provide one multi-cloud workflow for all our tools. We want this to be able to work across different cloud platforms.

Announcing HCP Vault Private Beta on AWS

Based on HCP, we're proud today to announce HCP Vault on AWS. This is now available in private beta. With HCP Vault, you get these three pillars of HCP.

Push-Button Deployment

You log in, name your cluster, choose a network to attach it to, click create. And in a few minutes, you'll have a full running Vault cluster. All of this is based on fully-managed infrastructure. You don't have to provide any servers or spin up any on your own — we handle all of that for you. We also handle any of the security issues, OS patching, upgrades, etc., associated with that infrastructure.

One Multi-Cloud Workflow

This is all built for one multi-cloud workflow. This means that — while today we're announcing HCP Vault on AWS — all of the features and underpinnings are there to support a multi-cloud replicated environment in the future.

You can see this in our abstraction of networks in HashiCorp Cloud. In this screenshot, you can see the HashiCorp virtual network and how you can attach a Consul cluster and a Vault cluster to it. Today, this virtual network lives in AWS. But in the future, we'll be able to use this to span multiple clouds automatically.

The Goals of HCP

By using it, you can get faster cloud adoption. You don't need to spend as much time setting up our software, learning how to operate it, etc. You can click one button and get it up and running. If you can get our software up and running more easily, that quickly increases productivity for your applications; they can begin consuming our software right away.

Third, it gives you that multi-cloud flexibility much more easily. Software like Vault, Consul, and others make it much easier to use a common set of APIs to do things like security or networking in a multi-cloud way.

HCP Vault on AWS is available in private beta today. You can sign up for the beta by visiting hashi.co/cloud-platform and requesting access. That covers machine authentication and authorization. Next, I want to talk about machine-to-machine access.

Machine-to-Machine Access with Consul

Once you've established identity with machines, they then need to communicate with each other and prove that — on both ends — the correct people are talking to each other.

When we think about this, it's a networking problem. Two machines are trying to communicate. And for that, we've built Consul. Consul is a tool that provides service networking across any cloud. To do that, Consul provides three primary pieces of functionality: service discovery, a multi-cloud service mesh, and network infrastructure automation. I'm going to dive into each one of these.

Service Discovery

This is the original feature that came out with Consul 0.1. Service discovery provides a global catalog — a global view of all available services currently deployed in your network. Along with the service availability, we cover the health of that service. Is it healthy? Is it unhealthy? Is it running? Is it not running? And you could see this in one UI — in one view — across every service in your infrastructure.
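As a small, illustrative example (service names and ports are placeholders), registering a service and then discovering its healthy instances looks roughly like this:

```shell
# Register a service instance with the local Consul agent.
consul services register -name=web -port=8080

# One global view of every registered service.
consul catalog services

# Discover healthy instances over DNS (Consul's DNS interface, default port 8600)
# or over the HTTP API.
dig @127.0.0.1 -p 8600 web.service.consul
curl 'http://127.0.0.1:8500/v1/health/service/web?passing'
```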

Multi-Cloud Service Mesh

Built more on top of this service discovery side, we now can attach identity and facilitate networking between multiple services. The service mesh lets you ensure that every connection is authorized by proving identity on both sides — as well as encrypted to protect the data in transit. The service mesh we built is multi-cloud, so you could run different endpoints in different cloud platforms and have the networking work throughout all of it.
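To illustrate the authorization piece, Consul intentions express allow and deny rules by service identity rather than by IP. The sketch below uses the pre-1.9 intentions CLI; newer Consul versions express the same rules as service-intentions config entries.

```shell
# Service names are placeholders. Encryption in transit comes from the mesh's
# mutual TLS; intentions govern which identities may talk to which.
consul intention create -allow web db   # "web" may call "db"
consul intention create -deny '*' db    # everything else may not
consul intention check web db           # should report that the connection is allowed
```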

Network Infrastructure Automation Feature Set

Utilizing Consul's real-time global catalog, you can do things such as updating load balancers in real time. You don't need to submit a ticket or wait to manually update something like the nodes behind a load balancer. You can now use the API that Consul provides and the real-time updates that it has — and dynamically update that load balancer configuration. With Consul, we provide a number of tools beyond the API for you to do that.
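One way to wire this up (a sketch, not the only option) is consul-template, which watches the catalog and rewrites a load balancer configuration whenever service instances change. The template, output path, and reload command below are placeholders.

```shell
# Render an HAProxy backend from Consul's live view of the "web" service and
# reload the load balancer whenever that view changes.
cat > backend.tpl <<'EOF'
backend web
{{ range service "web" }}  server {{ .Node }} {{ .Address }}:{{ .Port }} check
{{ end }}
EOF

consul-template -template "backend.tpl:/etc/haproxy/backend.cfg:systemctl reload haproxy"
```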

Announcing HCP Consul Public Beta on AWS

Earlier this year, we talked about HCP Consul being in private beta. And today, I'm happy to announce that HCP Consul on AWS is now available in public beta. You can sign up for HCP Consul at hashi.co/cloud-platform.

To review — HCP Consul is just like HCP Vault, which we just talked about. With HCP Consul, you can log in, push a button, and get a Consul cluster up and running on AWS.

Human Authentication and Authorization

Armon Dadgar: Mitchell already did a great job covering the first half of the spectrum: how we think about machine authentication and machine-to-machine access.

The other half gets a little bit strange when we start to bring people into the mix. The moment we bring in people, we also introduce a similar, almost parallel challenge to everything we had with machines: how do we start by asserting some notion of identity? We each have our own personal identities. But how do we make it something that the computers can trust and understand, and that is cryptographically verifiable?

That's where we have this pillar around human-based authentication and authorization. It's about establishing that identity in a trusted, secure way — such that we can then use that identity downstream with other systems programmatically.

This is a problem — as you might imagine — we've had for a very long time. Ever since users started interacting with systems, they needed some way to prove their identity to the system.

Establishing Human Identity

Maybe starting with this bucket, we've seen there are well-established patterns of how to do this — and these have evolved over time with different generations of technology as well.

The most basic approach is simply distributing some form of credentials to users. This might be usernames and passwords, and they're logging in to the system directly. This might be some form of certificates. It could be a hardware device that they use to assert their identity. But we're distributing something to the user, and the user is providing that back.

As we get a little bit more sophisticated, we don't necessarily want to have a user have a specific username and password for every system they might interact with. This starts to become cumbersome as you have many different systems — many different users — so you start to move towards systems that provide a single sign-on experience.

In a more traditional private datacenter, this might have been powered by something like Active Directory. It might have been powered by something like OpenLDAP — where the user would provide their credentials once to the Active Directory server or the LDAP server. Then that identity would flow downstream to other systems so that you'd get the single sign-on.

As we're moving to a more cloud-based architecture, the same pattern exists, but we're starting to use a more cloud-oriented set of systems. This might be Okta, it might be Ping, it might be ADFS — but the idea is similar. We would do our login one time against these cloud-based systems — then those cloud-based systems provide our identity, our authentication, and our authorization to other downstream systems. It's a very common pattern that we've applied here, and it has evolved over the generations of technology.

So, the next piece of this is: we have asserted a user identity, but how do we now interact with the machines, the applications, the services we want — which may or may not understand those identities?

Traditional Workflow for System Access

When we talk about the traditional workflow for accessing these systems, it starts when a user probably requests access to some private resource. This could be — let's say — an internal database that's running on our private network. Maybe they're a database administrator — they need access to that database to perform routine operations.

They have probably been provided with a set of credentials that allow them to get onto the private network to begin with. This could be VPN credentials, SSH keys, etc. Then they need to know the hostnames and IP addresses of the database so that once they're on the network, they know what to connect to. And lastly, they need a set of application-specific credentials — in this case, a database username and password — to be able to interact with the endpoint system.

Then their workflow goes left to right. First, they have to log in to the VPN server or the SSH server using those credentials that they have. Next, they need to request access over the network to that private system using the hostname or IPs they know about. Then once they're connected to the database, they would provide the application-specific credential — in this case — the database username and password. Then they'd be connected, and they can interact and perform whatever operations they need.

Now, there's a number of challenges with a traditional approach. This ties back to Mitchell's earlier point around this transition we're going through from static-based systems to more dynamic environments.

Onboarding Is Difficult

How do we think about onboarding new users? For every new user, do we need to distribute a set of new SSH keys, VPN credentials, database credentials, etc.? What about when that user leaves? What about having to do periodic password rotation or credential rotation? You can start to see how this onboarding process becomes cumbersome at scale.

User Has Network Access

Next, the user is connecting directly to a VPN or directly to an SSH bastion host. That — in effect — brings the user onto our private network. While that has its advantage, in that the user can now connect to the resources that are on the private network, it also has a disadvantage: the user can connect to all sorts of things on our private network. We don't necessarily want the user directly on the private network.

IPs Are Brittle

As a result, because the user should really only have access to a handful of systems, we typically would deploy a firewall in between the VPN and the target systems — or in between the SSH Bastion and the target systems. But that firewall operates at an IP level. There's a set of IP controls that constrain which set of users or which set of IPs have access to which set of IPs.

The challenge with this IP-based approach is that it's brittle. It works great in a very static environment. But the moment we have endpoints that are auto-scaling up and down, we're deploying new services, maybe we're running on Kubernetes where — if a node dies — the application gets moved to a different node and a different IP address. In these very dynamic environments, we have the challenge of keeping these IP rules — these IP controls — up to date. It becomes very brittle as our environment gets more and more dynamic.

Credentials Exposed

The last piece of this is that the user has to have those endpoint credentials — the database username and password in this case — to connect to the target machine. This means we're disclosing it to the user. That user could potentially leak it, leave it in a passwords.txt on their desktop, post it into Slack, etc. — so we create an additional risk of those credentials getting exposed.

Dynamic Workflow for Access

A different way of thinking about this is to use identity as the core primitive. This would start where the user again logs in with their trusted form of identity. They don't use a specific set of VPN credentials or SSH keys that were distributed to them. Rather, they use the same single sign-on: that one point of identity they already have from when they were onboarded.

Next, ideally, we would select the system we want to connect to from a set of existing hosts or services in a catalog. We wouldn't want to know a set of DNS names or hostnames or IP addresses that may or may not change in advance. We'd rather look at a dynamic catalog that shows us what we have access to.

The next piece is that we want the controls over what we have access to not to sit at the IP level, where things are dynamic, but rather at a logical level, where it's service-to-service. I want my database administrators to have access to my set of databases, regardless of what the IPs of those databases are.

Lastly, we want the connection to happen automatically to the endpoint service without necessarily giving the user the credentials underneath the hood. This has a number of advantages — if we can do this.

Onboarding is Easy

One is that onboarding and offboarding is dramatically simplified. We don't need to distribute a bunch of specific credentials. We don't need a rotation workflow. We don't need a per-system offboarding process. We add the user to our IDP, or identity provider, and when they leave we remove them from that identity provider. And everything is linked to that.

Network Remains Private

Next, because we're selecting a host from the service catalog, we don't need to give users direct access to the network. They don't need to know what the internal IP address is. They don't need to be on that private network because they just care about the target host — the target service that they're trying to access.

Configuration Is Stable

The advantage of moving the rules up from an IP level to a logical, service-based level is that it's much less brittle. Now we can put those services in an autoscaling group and scale them up and down. We can have a node fail over and the app move to a different node. We can deploy net new services. And we don't have to worry about changing our controls all the time to keep pace.

Credentials Not Exposed

Lastly — because we're not distributing the credentials to the user themselves — they don't necessarily have the database username and password. When they connect automatically, they're authenticated against the database, and they've never seen the database credential — making it that much harder to expose it or cause an additional data leak.

Announcing HashiCorp Boundary

How do we make this workflow real? How do we move towards this identity-based model, rather than the more static traditional workflow? Today, I'm very excited to announce a brand-new project called HashiCorp Boundary. It's free, it's open source — and I want to spend a little bit of time today diving into what it is and how it works.

At the very highest level, Boundary is trying to provide the workflow we just talked about: an identity-centric workflow for authenticating and authorizing access to systems. This starts — as you would guess — with a user trying to access an endpoint system.

First, that user is going to authenticate themselves through one of these trusted forms of identity. Boundary makes this highly pluggable. Whether that identity is being provided by Okta, by Ping, by ADFS, or by Active Directory, it doesn't matter — there's a pluggable provider, much like in Vault, that allows that identity to be bridged in from whatever existing IDP you have.

Once the user is in, we have a logical set of authorizations in terms of what that user can access. We might map that user into the group of database administrators. We'll say database administrators have access to a set of databases.

The notion of what that set of databases is, is dynamically maintained in a catalog. That catalog can be programmatically updated using a Terraform provider. It can be kept in sync through integration with Consul — querying Consul's service registry for what services are where. But it can also be integrated with other service catalogs, such as Kubernetes, AWS, or other cloud APIs.

These other systems have a notion of these services and the ability to tag them or add different selectors. Being able to import those and reference those in a dynamic way — rather than have to deal with static IPs — makes this much simpler to manage.
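As a hypothetical sketch of what that looks like with the Boundary CLI (0.1-era syntax; the scope, catalog, and ID variables are placeholders), hosts are defined once (by an administrator, a Terraform run, or a catalog integration) and exposed to users only as a logical target:

```shell
# Build a catalog of database hosts and group them into a host set.
boundary host-catalogs create static -scope-id $PROJECT_ID -name "prod-databases"
boundary hosts create static -host-catalog-id $CATALOG_ID -name "pg-1" -address "10.0.1.20"
boundary host-sets create static -host-catalog-id $CATALOG_ID -name "postgres"
boundary host-sets add-hosts -id $HOST_SET_ID -host $HOST_ID

# Expose the host set as a logical target; users connect to the target,
# never to an IP they have to know in advance.
boundary targets create tcp -scope-id $PROJECT_ID -name "postgres" -default-port 5432
boundary targets add-host-sets -id $TARGET_ID -host-set $HOST_SET_ID
```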

Lastly, when the user goes to connect, we don't want to provide the credential to them at all. Where we can, we integrate with a system like Vault to provide the credential dynamically, just in time.

In certain cases, we have no choice — we might have to provide the user with a static credential. But where we can use Vault's dynamic secrets capability, how do we generate a credential unique to that session that's short-lived and time-bounded? So the user can connect to the database with a unique credential for that session that they never even see? And at the end of their session, that credential can be revoked and cleaned up, rather than being a long-lived static credential that we have to think about and manage?
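The underlying Vault capability that makes this possible is leased, revocable dynamic secrets. Here is a rough sketch, reusing the hypothetical database role from earlier:

```shell
# Each read issues a unique, time-bounded credential with its own lease ID.
vault read database/creds/readonly

# When the session ends, the credential can be revoked immediately instead of
# lingering as a long-lived static secret (the lease ID is a placeholder).
vault lease revoke database/creds/readonly/EXAMPLE_LEASE_ID
```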

The Goals for Boundary

The goals of Boundary are fourfold.

On-Demand Access

One is how do we do this on-demand access that's simple and secure. We don't want you to have to do a whole lot of pre-configuration and pre-setup. We want it to be very much push-button and on-demand.

Dynamic Environments

Two is we acknowledge the world is becoming much more dynamic, much more ephemeral. How do we support that? That's around a few of these different pieces. Making the system very API-driven and programmatic — integration with dynamic service catalogs, integration with dynamic secrets, and leaning into this notion that we don't want to manage static IPs because our world doesn't consist of static IPs.

Easy to Use

The other piece is making the system easy to use. We want it to be user-friendly, because you have administrators who are configuring it and maybe understand the system in depth, but you also have end users who don't care how it works. They just want access to these endpoint systems. And we want it to be as easy to use as possible.

Free and Open Source

What we've seen time and time again at HashiCorp is that the best way to make these products successful is to build thriving communities on top of and around them. We're excited to do exactly that with Boundary as well.

A Deeper Dive into Boundary

Here's a screenshot where you can see Boundary's UI. You'll see four different boxes that describe logical services we might want to connect to. You'll notice we're not talking about IPs. We're not talking about low-level hosts. We want to connect to this bucket of high-level services and the environment they're running in. This ties back to the three aims. The first is identity-based access and controls: nothing is IP-driven, nothing is host-driven.

Two is we want an automated access workflow. We want this to be API-driven, to integrate it into our scripting environments, CLIs, automation tools, CI/CD pipelines, etc. So it’s a rich API that allows all of this to be automated.

Then lastly, how do we have first-class session management visibility and auditability of what's taking place? If we have privileged users accessing sensitive systems, we want to have full visibility of when and where that took place — and have that type of insight whether for security or compliance reasons. Of course, we know this is designed for practitioners because we also have a dark mode UI that's visible here as well.

Boundary Connect

Here's a different example of interacting, which is what we expect the day-to-day to be like. If I'm a Boundary end user — I'm not an administrator — I'm just trying to SSH into a target machine, how can we make this dead simple?

Here's how simple it can be; it's a single command. It's boundary connect ssh and the target that we're trying to get to. You can see that we are dropped right into a shell — and that's the goal. Behind the scenes, there's a whole lot of machinery that's making this possible. Our SSH client — that's running locally — is talking to a local agent that's spun up as part of this command; as part of boundary connect. That agent is doing the authentication for us against the gateway for Boundary and then establishing a connection to the Boundary gateway.
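End to end, the user-facing commands are roughly the following; the auth method and target IDs shown are dev-mode placeholders, and yours will differ:

```shell
# Authenticate once with your existing identity...
boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin

# ...then connect by logical target. A local proxy is spun up and the SSH
# client is pointed at it automatically.
boundary connect ssh -target-id ttcp_1234567890
```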

The Boundary gateway is then authenticating and authorizing us — and then connecting back to the endpoint system. Now we have an end-to-end connection, going from our machine to the gateway to the target environment.

But to the degree possible, all of this is automated and invisible to the user. They can continue to use whatever local tools they're comfortable with. Use your local SSH tool, use psql — use whatever local tooling — and you're always talking to a local agent that's proxying all the traffic back; very similar to how SSH port forwarding would work.

We're very excited about Boundary. Today is the launch of the product, and the 0.1 is available. Please go check it out at boundaryproject.io. It’s also on our GitHub page.

If you find issues, give us that feedback and engage with us (in our community forum). We’re super excited, and there’s going to be more content on Boundary later in the day.

Keynote Summary and Conclusion

Taking a quick step back, there are these four pillars as we think about zero trust security.

  1. How do we assign workload identity?
  2. How do we authenticate and authorize those machine workloads? That's our focus with Vault (the first 2 pillars).
  3. How do we take that identity and broker machine-to-machine access in a secure and automated way? Our big focus there is with Consul.
  4. As we bring humans into the loop, how do they authenticate and connect to these systems as well? We're introducing Boundary to look at solving this problem.

Then there's a whole slew of great existing solutions for doing single sign-on and bringing human identity in at scale.

All of this is part of our broader goal as we think about the security umbrella — the security focus at HashiCorp — how we do security the right way in these modern environments. Our strong conviction is the zero trust approach and identity-driven approach — where we have explicit authentication and explicit authorization for everything — is the right way to do this moving forward.
