Security Keynote: A More Secure World with Zero Trust — HashiCorp Vault, Boundary and Consul
In this talk, the Product and Engineering leadership from the Security Product Line at HashiCorp share their vision for identity-driven security and networking controls, delivered with a high-quality user experience, that can help bring the world to the safer cybersecurity posture of "Zero Trust" using the HashiCorp products Vault, Boundary, and Consul.
Speakers: Blake Covarrubias, Harold Giménez, Vinod Muralidhar
» Transcript
Vinod Muralidhar:
Hello, everyone. Welcome to the security keynote at HashiConf Europe. My name is Vinod Muralidhar, and I'm a product lead at the security product line here at HashiCorp.
Along with Vice President of Engineering Harold Giménez and Senior Product Manager Blake Covarrubias, today we're going to talk about zero trust security in the context of a multi-cloud world and how the HashiCorp products Vault, Consul, and Boundary can help you with that.
I wanted to start you off with this number: $6 trillion. That's the estimated security-related loss expected this year alone. And that number is set to grow to $10 trillion by 2025.
In the last 10 years, 20 companies have experienced massive data breaches to the tune of $1 million or more. And of those, 90% now use Vault.
» Securing a Datacenter: Then vs Now
But stepping back a bit, securing a datacenter in the past was a lot easier, or at least a lot less complicated, because you used to have 4 walls, a perimeter around your datacenter, that helped secure it. With that model, what's inside is trusted and good and what's outside is untrusted and bad.
But what happens when your apps and infrastructure themselves extend beyond these 4 walls into multiple datacenters or to cloud providers, or a combination?
That is the reality of where we are right now. And that is a direct result of the effects of digital transformation. You used to have a traditional datacenter that was a lot more static in the way it was set up, and the infrastructure itself was dedicated, thereby providing the perimeter, that wall around the datacenter, that provided security.
In the modern datacenter, the word is dynamic. The infrastructure itself is shared between a private and public cloud or a combination.
What does it mean for security? The effects of digital transformation in the security context means that we have no traditional datacenter, that static environment with those 4 walls offering you the ability to control all ingress and egress points and to set security perimeters using your IP addresses and firewalls.
You also had a much better way to control and monitor all traffic within your infrastructure and incoming traffic from outside of your perimeter. You also had the ability to set up physical access restrictions to your datacenter and the critical resources within it.
Fast forward to the modern datacenter, where everything is a lot more dynamic. Infrastructure is set up and brought down on demand, and access and networking are not only dynamic but software-defined.
And identity forms the center of this kind of security where you no longer have those 4 walls.
» The 4 Pillars of Zero Trust Architecture - Security for the Modern Cloud Datacenter
At HashiCorp, we think about this in terms of 4 pillars. The multi-cloud world and the zero trust security model within it fall under these 4 buckets:
Machine authentication and authorization
Machine-to-machine access
Human-to-machine access
Human authentication and authorization
We'll be talking about products within HashiCorp that help with each of these problems and also how they all work together in providing zero trust security to organizations.
» Machine Authentication and Authorization
The core mantra of the zero trust world is that you trust nothing and you authenticate and authorize everything at every point.
That's where we bring in HashiCorp Vault. HashiCorp Vault was launched in 2015 and since then has helped developers and companies secure their applications and infrastructure by centrally managing and storing secrets (tokens, passwords, API keys, certificates, all of that) and by protecting their data with methods like encryption.
I'll talk a little bit about both of those.
The Vault approach starts with the fundamental notion that cloud security leverages trusted sources of identity that your environment already has. The big difference between a lot of systems that came before Vault and what Vault does is that we make use of your infrastructure's trusted sources of identity to provide that authentication.
Then Vault layers in authorization, using a robust policy mechanism that governs access to these secure and critical resources.
Vault's identity brokering means that if you are an organization that has Active Directory, LDAP, a cloud provider's solution like IAM, or a cloud identity provider (IdP) like Okta or Ping, you are able to plug that into Vault.
What Vault does is, as these different applications and users are authenticating into Vault, it is able to broker those into a single entity within Vault upon which you can apply those policies.
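To make that concrete, here is a minimal sketch of the machine side of that flow, using Vault's official Go client. It assumes a Vault server with the AppRole auth method enabled as a stand-in for whatever trusted platform identity you already use (Kubernetes, AWS IAM, and so on); the role and environment variable names are hypothetical.

```go
package main

import (
	"fmt"
	"log"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// Connect to Vault using the standard VAULT_ADDR environment variable.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Authenticate with an identity the machine already holds. AppRole is a
	// stand-in here; the same flow applies to Kubernetes, AWS IAM, and others.
	resp, err := client.Logical().Write("auth/approle/login", map[string]interface{}{
		"role_id":   os.Getenv("APPROLE_ROLE_ID"), // hypothetical role
		"secret_id": os.Getenv("APPROLE_SECRET_ID"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Vault brokers the external identity into an entity and returns a token
	// scoped by the policies attached to that entity.
	client.SetToken(resp.Auth.ClientToken)
	fmt.Println("authenticated; policies:", resp.Auth.Policies)
}
```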
Customers primarily use Vault for 2 use cases. One is secrets management, which allows you to centrally store and protect secrets across clouds and applications. The other is protecting your data using techniques like encryption.
Last year, we released Vault's Transform secrets engine, which allows application developers to directly do data masking and format-preserving encryption, as well as data protection techniques such as tokenization.
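As an illustration of that encryption-as-a-service model, here is a hedged sketch using Vault's open source Transit secrets engine (a sibling of Transform) through the same Go client. It assumes Transit is mounted at its default path with an encryption key named orders, which is hypothetical.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN"))

	// Transit expects base64-encoded plaintext. Vault encrypts it server-side,
	// so the application never handles the encryption key itself.
	plaintext := base64.StdEncoding.EncodeToString([]byte("4111-1111-1111-1111"))
	resp, err := client.Logical().Write("transit/encrypt/orders", map[string]interface{}{
		"plaintext": plaintext,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ciphertext:", resp.Data["ciphertext"])
}
```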
Vault has been very successful, as numerous statistics show. In the last year alone, we had over 15 million open source downloads of Vault, and trillions of secrets are served by Vault every single year.
In addition, Vault is trusted by a lot of large corporations; 70% of the top 20 U.S. banks use HashiCorp Vault for their security needs.
In order to make things simpler for our customers and end users, earlier this year we launched HCP Vault, a managed offering of Vault on the HashiCorp Cloud Platform.
The goal there is primarily to help customers focus on their business problems rather than on operating Vault, by providing a push-button deployment and a fully managed infrastructure that HashiCorp runs.
Eventually, we will be able to offer a multi-cloud workflow.
I'm going to pass it off to Blake Covarrubias, who will talk a little more about what happens once authentication and authorization are complete and these applications have to talk to one another. Over to you, Blake.
» Machine-to-Machine Access
Blake Covarrubias:
Thanks so much, Vinod. Before talking about how Consul enables machine-to-machine access, I'll provide a quick overview of Consul. Consul is a service networking platform which helps you discover and securely connect to any service across any environment.
Consul has 3 main use cases:
Service discovery provides an up-to-date registry of applications across your infrastructure environment (see the sketch after this list).
Service mesh provides secure service-to-service connectivity and intelligent routing for application communication.
Network infrastructure automation allows you to automate changes in your network infrastructure based on changes to Consul's service catalog.
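As a quick, hedged illustration of that first use case, here is a sketch that registers a service and then discovers healthy instances of it by name, using Consul's official Go client. The service name web and its port are hypothetical, and a local Consul agent is assumed.

```go
package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (CONSUL_HTTP_ADDR by default).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register this instance in Consul's service catalog.
	err = client.Agent().ServiceRegister(&consul.AgentServiceRegistration{
		Name: "web", // hypothetical service
		Port: 8080,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Any other service can now discover healthy "web" instances by name,
	// without knowing their IP addresses in advance.
	entries, _, err := client.Health().Service("web", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("%s:%d\n", e.Node.Address, e.Service.Port)
	}
}
```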
All of these play a part in a zero trust solution. But today I'm going to focus primarily on Consul service mesh.
A service mesh is an infrastructure layer that facilitates service communication using a proxy that's deployed alongside each application.
Zero trust communication is enforced within the service mesh by using an identity-based security model in which proxies mutually authenticate the identity of source and destination services and protect application communication using TLS encryption.
Connections within the mesh are authorized based on service identity that's encoded in these TLS certificates, instead of lower-level network information. These certificates in the mesh can come from Vault or your existing corporate CA.
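Here is a rough sketch of what that identity-based authorization looks like in practice, again with Consul's Go client: an intention that allows one service to call another by name rather than by network location. The service names web and db are hypothetical, and this assumes a Consul agent with the service mesh (Connect) enabled.

```go
package main

import (
	"fmt"
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Authorize mesh traffic by service identity, not by IP address:
	// "web" may call "db", regardless of where either is running.
	intention := &consul.ServiceIntentionsConfigEntry{
		Kind: consul.ServiceIntentions,
		Name: "db", // hypothetical destination service
		Sources: []*consul.SourceIntention{
			{Name: "web", Action: consul.IntentionActionAllow},
		},
	}
	ok, _, err := client.ConfigEntries().Set(intention, nil)
	if err != nil || !ok {
		log.Fatalf("failed to write intention: %v", err)
	}
	fmt.Println("intention written: web -> db allowed")
}
```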
Consul enables organizations to shift toward an application-based networking model.
By adopting this framework, Consul helps organizations decouple network operations from IP addresses and focus more on logical services. It also enables services to be discovered and secured using that same logical identity.
These are significant networking changes, which can be challenging for some organizations to manage, depending on the resources available to them. But HashiCorp is here to help with that.
Now I'm going to pass it over to Harold Giménez to talk about zero trust for human-to-machine access. Over to you, Harold.
» Human-to-Machine Access
Harold Giménez:
Thank you, Blake.
As we continue down our exploration of zero trust security, let's talk a bit about identity.
Identity is what allows us to do authentication and authorization of humans. The way this works traditionally is that a user is added to a machine or a service of some sort, and there are a ton of services that users need to be added to as they get onboarded into organizations and enterprises.
The idea of an SSO directory came along, and there was some standardization back in the day with things like Kerberos.
But moving forward to more modern identity services, we see cloud-provided services like Okta, Ping Identity, and Azure AD.
The idea is the same. We add a user to the directory on onboarding and we define what group they're in. Based on this group, users have permissions to do certain things. For example, a user may be added to the engineering group, and by doing that, they may get access to source control.
But that's not the whole story. It's not only about what online services you can access. If you take this a bit further and talk about infrastructure and how access to infrastructure is controlled, then it gets a little messy.
Let's talk a bit about human-to-machine access and how that could work in the world of zero trust and the tooling that we offer. Let's talk a bit about HashiCorp Boundary, which is all about remote access for modern infrastructure.
Traditionally, the way we would onboard new users into the infrastructure would be that a user gets added to the Active Directory or to the identity directory.
But that doesn't necessarily transcend across all of the infrastructure, which, by the way, is moving all the time.
In a more traditional world, we're talking about allowing the user to enter the network perimeter.
To do this, we used a tool called a VPN, a virtual private network. But even configuring that is pretty complex. SSH key pairs need to be generated, and public keys distributed to the user's terminal as well as to the backend services. Credentials need to be furnished.
A number of steps need to be taken to give users access to these networks. And that felt OK in a world where, in order to access these services without a VPN, you had to physically be on premises and have access to that network.
This was a step forward in terms of allowing access to these networks from remote locations such as people's homes or a coffee shop.
But it's very difficult. It's a lot of steps to get that going. It's a burden on the user as well as on the IT teams that handle all this. It's cumbersome, to say the least.
It also doesn't solve the zero trust problem because really what you're doing is you're saying, "In order to access these hosts, be they SSH machines or be they database services or any host that's within the perimeter of a corporate network, well, now the user has access to that network."
Remember, we're in a world where if you have access to a network, you're trusted to interact with other hosts within that network. So the idea of authenticating every single interaction wasn't really the prominent way to think about this.
That's akin to finding a stranger in your living room and offering them a drink. You wouldn't do that. The fact that they're on the network or inside your perimeter doesn't mean that they're a trusted entity. Same thing here.
Now you have all these users that have access to these networks, and it's not that the users are necessarily going to harm you, but they could get hacked. Their machines might be compromised.
This is a very common way attacks on corporate networks are created: through the weak links, which are essentially humans, users who already have access to these networks.
That's something to be avoided.
The next step from there is, "If I'm inside the network, I'm going to reach some hosts and some databases or anything like that, so let's put a firewall there. Let's make sure that the right hosts can talk to each other via IP rules."
But remember also that in modern infrastructure, IPs are a little less static and underlying hosts move around. That provides flexibility for automating the recovery of hosts, doing upgrades, and maintaining security across the board by having immutable infrastructure that can be replaced continuously.
So there are some tricks that people play with, like DNS pointing to different underlying hosts. But ultimately, IPs are a brittle construct for thinking about this. It's not a problem that's necessarily solved with firewalls.
The last step is, "Now I'm in the network and I have access to the appropriate host target. Now I need credentials to connect to that database."
That is also a problem. Now the user has the passwords or the credentials to connect to that target host.
That's an issue, because those credentials can leak pretty easily.
And there are much better practices for dealing with those credentials. Those credentials ideally are one-time use, for instance, or very short-lived so that, even if they leak, they can cause very little harm. We're narrowing that window of attack.
And in many cases, as we'll see in a second, users don't even need to have access to those credentials.
Let's go through a more dynamic and more modern approach to this.
In a more modern world, the same thing would happen at the beginning: a user gets onboarded and added to the directory service. That's a one-time deal.
A user gets added to the directory service and they get added to the appropriate groups. For instance, an engineer gets access to a number of services in the backend. That's pretty easy.
There's no handshaking of SSH keys and additional steps required. That's a huge step forward.
As we think about access and security in terms of verifiable human identity, you start thinking about multiple other ways to improve how security works in the general case.
This is a very stable approach. Authorization is based on roles and services. All engineers get access to source control. Easy.
All analysts get access to the database. So when I add a user to the analyst group or role, everything else gets configured automatically.
Then there's selecting hosts from a service. This is functionality in Boundary where the hosts that you have access to appear in your list of hosts.
From there, it's a click away. You click on it and you establish a session to that host even if you're not inside the network.
The way that works is pretty interesting. There's a catalog of hosts that gets filtered and authorized based on the configuration in Boundary.
Interestingly, the host catalog in Boundary can be dynamic. You might have AWS hosts that come and go, EC2 instances, for instance, or RDS databases that are provisioned and deprovisioned, and it's this constantly morphing machinery.
That gets pulled into Boundary as a list of hosts, which then eventually get tagged and appropriately grouped together to provide access to the user.
From the user's perspective, all of this is going on in the backend, and all they see is, "Here are the logical systems that I get access to, and it's a click away to go and do that."
Notably, credentials don't even need to make their way to the user, because Boundary can inject credentials as the session is being established, without their ever reaching the user's laptop or terminal.
This is really powerful because Boundary can interact with Vault, intercept the protocol-level session, and say, "Does this user have access to that host? Yes? OK. Let's generate a credential now."
Vault can generate a credential on the fly and set a time-to-live on it. So it's short-lived, injected into the protocol packets, and used to establish a connection to the target host.
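To illustrate the Vault half of that flow, here is a minimal sketch, again with the Go client, of requesting a short-lived credential from Vault's database secrets engine. The mount path and role name (database, analyst) are hypothetical, and in the integration described here Boundary would perform this step itself, so the user never sees the result.

```go
package main

import (
	"fmt"
	"log"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN"))

	// Ask Vault to mint a brand-new database credential for this session.
	// The returned username/password exist only for the lease duration.
	secret, err := client.Logical().Read("database/creds/analyst") // hypothetical role
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("user=%v ttl=%ds\n", secret.Data["username"], secret.LeaseDuration)
	// When the lease expires (or is revoked), Vault deletes the credential,
	// so there is nothing long-lived left to leak.
}
```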
It's extremely powerful because, all of a sudden, there is no credential to leak. It doesn't even get to that point.
To summarize: a user can authenticate with an identity provider. This is an investment that most organizations have already made; they already have some form of identity directory, so we can piggyback on that.
Newer protocols for interacting with directories exist. Integration with them is quite natural and easy.
Then Boundary can authorize a set of hosts that can be dynamic. They may be coming from a Terraform run that automatically updates the host catalog in Boundary, or Boundary could pull in the hosts that Consul knows about and provide that connection.
Some of the public clouds offer APIs that allow Boundary to repopulate the host catalog appropriately.
Finally, access to the target host becomes a click away. And because Boundary is smart about what the underlying protocol of that connection is, it can do smart things with that.
Most interestingly, it can inject credentials and never make those credentials available to the user. Those credentials can be managed by Vault; Boundary and Vault form a strong duo, with Vault handling dynamic credential provisioning and management while credentials never make their way to the user.
That's Boundary in a nutshell, and how human-to-machine connections happen with the stack, in this case, with Boundary and with Vault.
Boundary is a fairly new project. We open sourced it late last year. One of our primary goals is to remain open source and offer that as a community project.
We encourage you to get involved, follow along, and provide your feedback, and your code as well. We're happy to receive contributions and collaborate with the community on that.
As far as the project's goals go, let's talk about on-demand access.
This, alongside dynamic environments, means that the host catalog is continuously updated, making it very easy to access the right hosts and trivial for an admin to manage what people have access to.
We want this to be a tool that is built for developers, but also built for any end user. So it has to be very easy to use.
There's a nice UI for it. People who aren't super technical can use Boundary and get value out of it and solve these problems for organizations of every size.
That is the full story. When we talk about zero trust, we're talking about not trusting connections or sessions just because they're within the network, because some firewall let them in.
We're talking about authorizing every single interaction to prevent all sorts of attacks that have occurred.
There's a suite of tools that manage all of that based on the identity of either the human or the machine: Vault for machine authentication and authorization, Consul for machine-to-machine access, and Boundary for human-to-machine access.
That is all we have for today. I want to say thank you to Blake and Vinod for joining me in this security pillars keynote.
I hope you have a great conference and that I see you on the internet.