Defense Against the Dark Arts: Building Security Through Adversarial Modeling

By using adversarial modeling in the process of building and testing HashiCorp Vault, we've created a product that stands up to some of the most dangerous security threats in the world.

In this session we'll review how Adversarial Modeling has allowed Vault to stand up to some of the most dangerous security threats in the world, and even allowed HashiCorp Vault to defend data in the midst of real world data breaches and cyberattacks.

Transcript

Hi, my name is Andy Manoske. I am the principal product manager for the security product line here at HashiCorp. Today we're going to talk about defense against the dark arts — or, as I like to call it, adversarial modeling.

Who am I?

I've been with HashiCorp for the better part of the last four years and change. I joined HashiCorp as one of the first product managers in the company to focus on security. The reason I joined to focus on security was twofold.

One, there was a very exciting open source project called Vault that we had built, and we were excited to start figuring out how to productize it and build an enterprise offering. Two, because most of my career had been spent in security.

Prior to HashiCorp, I served in roles such as head of encryption and product security for defense systems at NetApp, and worked in security research at a company called AlienVault that was acquired by AT&T in 2018.

When I wasn't building security software or researching threats, my focus was primarily on investing in companies that were building the next generation of infrastructure and security technology.

Security is my passion. It's what I studied in college. I've spent the better part of my career focused on it. Ultimately, building software to defend against adversaries is important to me — both professionally and personally. So why are we talking about adversaries, and what's with the Harry Potter reference at the beginning of this?

Tales From the Front

When we think about building security software, the ultimate test of whether or not security software is effective is how it does in real-world cyberattacks. So, let's talk about real-world cyberattacks.

This is a company that you would recognize. This company was the subject of a several-month-long data breach by an adversary that had a lot of resources and was able to compromise their perimeter security — as well as breach the data security of a variety of reservoirs of data within that organization.

This is a fairly common attack style that you saw in the last five years of the 2010s. An adversary exploits either a protocol issue or steals a valid credential through phishing, and compromises perimeter security. Then — given time — they're able to take a look at how various reservoirs of data within the organization are protected.

Given enough time, something like Kerckhoffs's principle takes hold, and they're able — with the power of Google and a variety of other threat databases — to interrogate the information there and find an exploit they can use to get around the security on that reservoir of data.

In this case, it was transparent database encryption. They were able to break transparent database encryption by using an exploit for that database — the database was running a stale version. And as a result, through the automated set of tools in their attack toolset, they were able to take user information, PII data, etc.

Vault was being used in this infrastructure too. The question then comes down to: in this real-world data breach, how did Vault stand up?

Pretty well, actually. Vault was able to stand up against this adversary and the advanced toolset they used to breach the perimeter security as well as the reservoir security. All the keys that Vault was storing — the credentials, as well as encryption keys for a variety of encryption processes within that organization — were safe. The adversary threw everything they had at Vault, and Vault stood up to it.

The Problem with Building Security Tools

This attack highlights really well that security means different things to different people. When you're building security software, questions about whether or not something is secure come into play in a variety of different ways.

Whether something is secure comes down to the data that you're protecting, the compliance requirements that you're subject to, and ultimately the threat model that you're facing. That variance can lead to — if you're someone like me, who is a product manager building security software — a very different set of experiences.

What Does Secure Crypto Mean for You?

Let me give you a good example of this. We've had situations where I've talked to Vault's open source and enterprise users who say that security for them is something that stands up to hackers, nebulously, and won't fail when it comes time for them to rely on it. If it satisfies these conditions, it's secure.

At the same time, I've had situations that are more on the Vault Enterprise side, where we've talked to users that are saying, "Look, I have to build security not just for today, but security 15 years from now. The data that I'm protecting is going to live there for a very long period."

Topics like post-quantum cryptography come into play or building in things like duress passwords. Or if I'm in a FedRAMP environment in the United States, building in things like login banners or alignment of Vault to cryptographic standards like FIPS 140-3 start to come up.

These are very explicit requirements, and — to a layperson or someone else that doesn't live in a world of building government security for adversaries like spies — this could seem overly complex. Or, in some cases, overkill in general.

So, who's right here? Who is the barometer of security? There's the very light, feather-touch approach that Alexis Murphy represents. Security for her is important, but at the same time, there's a raptor trying to break into her room, and she's trying to use an Irix system to manage a park and millions of credentials. Rough. She's got bigger problems on her hands.

Or John here, who has been working in security for the last 30 years. That's probably not even his real name. He thinks you're a sketchy spy, etc., and comes to you with all 450 requirements of KMIP, slams them on the table, and says, "Align with each one of them." Who is right in terms of what security means?

Who's Right? Both of Them

Well, like we said before, both of them are right.

Alexis

Alexis' primary goal here — when you think about the data that she's protecting — is PII data. When you investigate a little more, the GRC requirements that she's aligning with — the governance and regulatory requirements that she's thinking about — don't give her explicit crypto requirements.

Saying that encryption is used is generally good enough. The adversaries that she's facing are primarily focused on financial crimes and extortion. They can be very well-resourced and advanced, but they may not have the ability to perform codebreaking or cryptanalysis.

The lack of explicit cryptographic requirements makes a lot of sense for Alexis. Again, there is a raptor trying to break into Alexis' room. She's got bigger issues to deal with.

John

His requirements sound extreme; they're very explicit. But that's because of what John is protecting. In this case, he's protecting — as you dig deeper — US Federal data, data that are subject to requirements like Common Criteria. It might even be data that are for the US military or the intelligence community.

In both of those situations, the secrets being protected are archived for a very long period. We have to think about adversaries with a lot of resources and a deep background in breaking cryptography. This is why the regulatory requirements associated with John's data probably have explicit crypto requirements. And ultimately, those adversaries have a long time to develop an attack against John.

When we were talking about the real-world attack against Vault, that adversary had months. But what if an adversary had a decade and a half to stage an attack against Vault? Now we've got to build security that is very explicit, and built for a world where the computing power to breach it may not even exist yet.

When we build Vault, the challenge is we've got to build security software for both Alexis and John. How do we build a comprehensive set of tools to withstand their adversaries and their requirements for defense?

This is where a lot of the Vault team, including myself, rely less on our background as defenders and more on our exposure to how adversaries work — to build what we call adversarial modeling and defense within Vault.

To Build Defensively, Think Offensively

Adversarial modeling is not a new topic. It has come up a lot in security and threat intelligence research. Doctors Dolev and Yao — in their seminal work on this, from back in the eighties — give a good description of adversarial modeling.

To quote them, "[i]n order to think about the security of a protocol, we need to define the adversarial setting that determines the capabilities and possible actions of the attacker."

What does that mean? If I want to build good security, I have to think about who is trying to breach that security. We need to build a framework that defines, given the various types of adversaries we know, what data and intelligence we can gather on those adversaries — and, ultimately, a good idea of what we don't know: the known unknowns. How do we think about building security infrastructure that stands up to those kinds of adversaries? That can be resilient when each one of those adversaries and threat models throws everything and the kitchen sink at your system?

Adversary Modeling

This is how we think about building security within Vault. More explicitly, the framework that we use for adversary modeling takes into account four key things. First, who are the adversaries that may attempt to breach Vault? When we think about building Vault comprehensively, again, we've got to factor in both Alexis' and John's adversaries.

We need to factor in the adversaries trying to — possibly in the future — wield quantum cryptanalysis against Vault or dedicated codebreaking capabilities to attack Vault's data. At the same time, we need to build security in such a way that Alexis can easily deploy Vault and never have to worry about it because again — a raptor is trying to break into her room, and that's a bigger primary issue for Alexis at any given time.

We need to think about all of that range of adversaries. When we look at those adversaries, we need to understand three key things. First, our assumptions: what we know about them — and also what we don't. Second, the capabilities of each adversary: what can they do and bring to bear to attack Vault? And finally, their goals: what are they looking for when they're trying to breach Vault's security?

I've highlighted here a few great pieces of security research on this. My favorite paper is "The Role of the Adversary Model in Applied Security Research." There are a ton of different ways you can look at this, but that's the great seminal paper on how to construct an adversarial modeling framework. It's the one we effectively use in Vault when we think about any feature or enhancement — or even bug fix — that has a security implication.

What is an Adversary?

First, we use this term a lot: adversary. What does adversary mean? This is really important. In Vault's case, we define an adversary as anyone who is attempting to access a secret or control within Vault but has not been given explicit access. If you look at our security model on vaultproject.io, this is very explicit for a reason.
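To make that definition concrete, here's a minimal sketch in Go of the default-deny idea — anyone without an explicit grant is treated as a potential adversary. The Grant type and the paths here are illustrative, not Vault's actual policy engine:

```go
package main

import "fmt"

// Grant represents an explicit capability on a path.
type Grant struct {
	Path       string
	Capability string // e.g. "read", "update"
}

// authorized returns true only if an explicit grant matches the request.
// Everything else is denied by default — the requester is modeled as a
// potential adversary, malicious or not.
func authorized(grants []Grant, path, capability string) bool {
	for _, g := range grants {
		if g.Path == path && g.Capability == capability {
			return true
		}
	}
	return false // default deny
}

func main() {
	grants := []Grant{{Path: "secret/data/app", Capability: "read"}}
	fmt.Println(authorized(grants, "secret/data/app", "read"))   // true
	fmt.Println(authorized(grants, "secret/data/app", "delete")) // false: adversary by definition
}
```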

An Adversary May Not Always Be Malicious

Think about — if you've watched the Harry Potter movies — the example where Harry Potter accidentally blows up his aunt. Was Harry trying to murder his aunt? No — well, hopefully not, but it happened. If you're trying to build a framework that defends Harry's aunt, you need to think about Harry as a potential adversary — even if he is not malicious in nature.

Adversaries don't always have to be malicious; they can be accidental. This is important in Vault because there can be times when a user is accidentally trying to access information they shouldn't be able to access. We should think about Vault from a usability as well as a security point of view: how would you deal with that situation, with that user as a potential adversary?

Usability and Security Go Hand in Hand

If an adversary is trying to attack your infrastructure, they're typically going to exploit complexity. An area where there have been a lot of issues in the past is usability — with regard to using security suites and security tooling.

A good example of this is any situation where a security admin — DevOps admin, etc. — has to ask a user for their details because there's no usable way in the tooling to get that user information otherwise.

This has led to a situation where users now — in some cases — have to be reminded that admins will never request your credentials, because in the past there have been situations where someone might say, "What's your password again? I need to get in." That's a usability problem — a usability problem that translates into a security problem that could be exploited for attacks like phishing.

Remember, an adversary doesn't have to be malicious. Harry Potter could get angry and accidentally blow up his aunt — but you need to build against that. Second, when you think about usability, there needs to be — especially for any security product — a concern about how that usability may impact the security of the tool.

Does the inherent complexity of a usability issue — or of a workflow within that suite — allow an adversary to exploit it to get around the security or the cryptography protecting the sensitive information?

Adversary Assumptions

Assumptions are basically what we think an adversary brings to the table. When we think about the adversarial modeling framework, these split into two key areas.

The Environment

Where is the adversary attacking you from, relative to what I'm trying to protect? If I'm building a perimeter defense tool like a firewall or an IDS/IPS system, the attacker is coming from outside the network.

I can assume a certain set of capabilities and attacks that would come, given that our threat model is — potentially — coming from an external source. Vault does not have that luxury. We have to assume that everything up to Vault itself has already been breached.

This highlights a key point that we often summarize when we talk to Vault users — both in open source and enterprise. We will often say something like, "Vault is the bulletproof vest you wear in the tank," because we have to assume that the adversary is inside your network. Increasingly, that's not just a Vault problem. Every piece of security tooling now assumes that an adversary has been able to breach the perimeter in some shape or form — so environment is very important. Where is the attacker coming from?

Resources

What does the attacker bring to the table relative to their ability to launch attacks on Vault? This area has changed a lot over the better part of the last 10-15 years, as security automation and tooling technology has become much easier to use and more freely available.

Very real and legitimate suites like Metasploit — or more dubiously legitimate suites focused on simulating ransomware attacks or cryptanalysis attacks — have been used by adversaries and freely promulgated in places like the dark web to enable attacks on infrastructure. With Vault, we have to assume that tooling exists — but the question is how much tooling exists to enable an adversary to bring a lot of different types of attacks to bear?

Sample Adversary Assumptions

Let's think about this in an example of a real-world adversary. I am protecting a company called Ellington Oil. Ellington Oil here is facing an adversary who we're going to call Acid Burn. Acid Burn here is trying to breach Ellington Oil.

Let's use this framework. Where is she coming from? From what our intelligence says, she is likely coming from an external source. But she may try to install local agents through either physical or network infiltration. Expect them to come from the outside, but they may come from the inside as well. We have to think about the internal threat vector.

From a resources perspective, we know that Acid Burn may have stolen privileged credentials to access this information. Those are going to be stale — we can force a series of refreshes on all of our credentials — but they potentially have legitimate credentials that they might've stolen through dumpster diving.

There is no native knowledge of our systems infrastructure, but as Kerckhoffs's principle says, given enough time, they're going to get all that knowledge. If Acid Burn is going to be hitting our systems for a prolonged time, expect that they'll get it. But for a short-term attack, they're likely not to have a lot of native knowledge.

Finally, we know that Acid Burn is probably a teenager in high school in New York. They don't have any cryptanalytical, high-performance computing infrastructure. They're not going to be trying to break Enigma here. They're going to be using a lot of automated tooling they found on the web, rather than coming up with novel attacks because they've got some super-elite computer. We know the environment and the assumptions of the adversary.

What Are Their Goals?

This is important for two reasons. One, we need to understand what they're looking for. Getting an understanding of what those goals are helps us better understand how we can harden different pieces of infrastructure within Vault — versus the possible usability costs of hardening that infrastructure.

Knowing what data they're trying to exfiltrate gives us a good understanding of what kinds of attacks they're likely to levy against us. In Vault — as well as within frameworks like FIPS 140 and the NIST SP 800-140x set of documents — this allows us to better define how we harden what we call critical security parameters, or CSPs.

If I know what an adversary's goals are in breaching Vault, I know what gates they have to travel through to get to the inner keep. It's very important for us to define those CSPs — those gates they have to transit — and the various security controls within Vault, and to know how we can balance usability and security there.

Let's go back to Acid Burn. Burn is trying to cause chaos to cover an attack that exfiltrates data. We know that she and the rest of her cohort are trying to steal a garbage file from somewhere within my infrastructure — or some other piece of information. Burn is likely to go on a rampaging shotgun blast through my infrastructure. This is not a low-and-slow style of attack. This is throw everything and the kitchen sink — and cause as much chaos as possible to cover the real attack.

We're not sure what she and her cohort are trying to steal, but because she is purposely using this as a diversionary attack, anything that IDS and IPS are likely to pick up on, Burn's going to do. Burn doesn't care about being found. Burn hasn't spent millions of dollars on the kind of breach tooling or exfiltration capability that could quietly navigate into Vault, undetectable by IPS or IDS. So expect Burn to come in loud, basically.

Adversary Capabilities

Finally, we use the knowledge of who the adversary is, what they're looking for, their perspectives, and the various assumptions we have about them to understand their capabilities. This is basically: what are they going to throw at us? Given where they're coming from and what techniques and tooling they have, what types of attack are we thinking of? This has an implication on how you harden those CSPs — those critical security parameters within Vault — or any other type of system.

But this also has implications for where you should insert logging capabilities and for understanding how an adversary will breach your network. To be blunt, if you are building security software that's going to stand the test of time, you also need to accept that there may be times when an adversary is able to breach part of your security controls.

In those kinds of situations, you want to gather as much information as you can so that future security researchers — or law enforcement — can deduce information about the adversary, ideally attribute the attack and go arrest them, and help you harden your security tooling over time to withstand novel types of attacks.
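As a rough illustration of that logging point, here's a hedged sketch using Vault's official Go client to enable a file audit device, so every request leaves a forensic trail. The file path is an assumption for the example; VAULT_ADDR and VAULT_TOKEN are read from the environment:

```go
package main

import (
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR; NewClient picks up VAULT_TOKEN.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Enable a file audit device so every request and response is logged
	// (with secret values HMACed), leaving evidence for later attribution.
	// The path "/var/log/vault_audit.log" is illustrative.
	err = client.Sys().EnableAuditWithOptions("file", &vault.EnableAuditOptions{
		Type: "file",
		Options: map[string]string{
			"file_path": "/var/log/vault_audit.log",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```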

The Attack

Burn's coming in hot. She does not care about being found out. She is here to blast you with everything she's got. What does that mean? Her attacks are likely going to threaten system stability. Because Burn doesn't care about being detected, and she is wielding everything she can find within Kali Linux, every potential attack she could wield against you is meant to cause as much chaos as possible — so expect system-threatening attacks.

Likely, she's going to hit the system resources Vault uses. Especially in Vault's context, Burn may be trying to overload your storage engine or otherwise create resource-consuming workflows — something we should be able to defend against.

Burn is likely to deploy tool-based malware. Burn does not have a lot of dedicated cryptanalytical capabilities of her own. She may not have a lot of resources, but she's got the internet. She's probably going to grab every possible dropper package and piece of tooled malware that she can insert into her automation suite to throw at you — expect that. Expect known styles of attacks, and a lot of them, coming at you like a shotgun blast.

Finally, Burn may try to employ cryptanalysis. But because she doesn't have any dedicated hardware or dedicated cryptanalytical capabilities of her own, it's likely going to be stuff that you've already seen before: John the Ripper, or other styles of cryptanalysis — rainbow table attacks — where applicable. This is the stuff we need to defend against — not the quantum computing style of attacks.

Using Adversary Models

We've identified our adversary, Acid Burn. We know roughly where she's coming from. Through research, we've understood what her goals are, what her capabilities are, and finally, what attacks she can wield against us.
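One way to picture the finished model is as a simple record collecting assumptions, goals, and capabilities. This is just a thinking tool sketched in Go — the AdversaryModel type and its fields are my illustration, not anything in Vault's codebase:

```go
package main

import "fmt"

// AdversaryModel captures the framework from the talk: assumptions
// (environment and resources), goals, and capabilities.
type AdversaryModel struct {
	Name         string
	Environment  []string // where are they attacking from?
	Resources    []string // what can they bring to bear?
	Goals        []string // what are they after?
	Capabilities []string // what attacks can they actually mount?
}

func main() {
	acidBurn := AdversaryModel{
		Name:        "Acid Burn",
		Environment: []string{"external network", "possible local agents via physical or network infiltration"},
		Resources:   []string{"stolen (stale) privileged credentials", "freely available tooling", "no HPC or cryptanalytic hardware"},
		Goals:       []string{"cause chaos as a diversion", "exfiltrate a target file"},
		Capabilities: []string{
			"system-stability attacks",
			"tool-based malware",
			"known cryptanalysis (John the Ripper, rainbow tables)",
		},
	}
	fmt.Printf("%+v\n", acidBurn)
}
```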

Knowing this, when we start to design features or enhancements within Vault, we would think about this in the context of, "Given all of this, how well do we stand up against Burn?" Pretty well.

But there may be times when we need to defend aspects of our infrastructure where the critical security parameters — in this case, Vault's — require a certain set of system resources or other capabilities that would be impacted by things outside of our cryptographic model.

This is why we developed resource quotas. If I'm Ellington Oil here, one of the things I would probably want to do is turn on resource quotas, so that if Burn tries to overload a Vault node, it doesn't jeopardize system stability for the rest of the cluster — or the stability of my multi-cluster system as a whole if I'm using Vault Enterprise.
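For illustration, here's a minimal sketch of turning on a global rate-limit quota through Vault's Go client. The quota name and the rate of 500 requests per second are assumptions for the example — tune them to your own adversary model:

```go
package main

import (
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Create a global rate-limit quota so a noisy adversary like Burn
	// can't overload a node; excess requests are rejected instead of
	// threatening cluster stability.
	_, err = client.Logical().Write("sys/quotas/rate-limit/global-rate", map[string]interface{}{
		"rate": 500, // requests per second — illustrative value
	})
	if err != nil {
		log.Fatal(err)
	}
}
```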

There are other things I can do as well. I can enforce short-term TTLs to force Burn to re-authenticate. This is one reason we point users like Ellington Oil to our production hardening guidelines — and why we include guidance like this in them. We try to put ourselves in Ellington Oil's shoes, think about the attacks that Burn is going to levy against them, and give guidance on how to better protect against adversaries like Burn.
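A hedged sketch of the short-TTL idea using the Go client — issuing a token that expires after 15 minutes (an illustrative value), so a stolen credential goes stale quickly and Burn is forced to re-authenticate:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Issue a short-lived token: if it's stolen, it expires quickly,
	// and the legitimate holder has to re-authenticate to get a new one.
	secret, err := client.Auth().Token().Create(&vault.TokenCreateRequest{
		TTL:      "15m", // illustrative short TTL
		Policies: []string{"default"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token accessor:", secret.Auth.Accessor)
}
```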

Finally, given that Burn is not going to be coming in with any special cryptanalytics, we advise Ellington Oil that the cryptographic barrier they have is sufficient — we'll take care of all that heavy lifting ourselves. They probably don't need to plug into a quantum computer to draw entropy or anything else like that.

Basically: don't worry about that set of controls — the John, FIPS 140-3 style of controls. Focus on making sure that the tooling Burn got off Kali — the various exploit definitions and so on — can't be used against you to breach Vault in this case.

A World of Adversaries

Acid Burn is not the only adversary that we are defending against. Again, one of Vault's challenges is that we have to defend against an entire world of adversaries. We try to deal with this by respecting the fact that all of these adversaries are going to come and attack our data.

Given that security means different things to different users, we allow you — as a user — the capability to tune the security level within Vault up and down. If you need to defend against an adversary like Burn, you can turn it up a notch, institute resource quotas, and call it a day. If you need to defend against Elliot from Mr. Robot — or God forbid, defend against Ed from Cowboy Bebop — now we need to start thinking seriously about defending against dedicated cryptanalysis.

Or there are really scary situations where maybe Vault alone isn't good enough — like a physical security breach, where Aiden Pearce kicks open the door to your datacenter and starts doing things that hackers only do in movies. Well, okay. Maybe we should implement new infrastructure that allows Vault to better integrate with hardware or other kinds of infrastructure around you.

This is why we have capabilities within Vault that are built for these kinds of situations. Things that — unless you're in certain governance and regulatory compliance environments — you will probably never need to use, or might not even have heard of in Vault. But they exist to allow you to defend against Aiden Pearce — or some other very advanced physical adversary who will also try to breach your infrastructure.

It's not something only Vault has to think about. Security is a comprehensive topic. Frequently, when we build this tooling and these protections within Vault, they rely upon other security tools. This is why we have partnerships as a company with a number of other companies in the hardware security space — HSM vendors, companies that focus on physical tokens like Yubico, etc. Our goal in Vault is to allow you to dial up and dial down based on what you know about your adversary, but we have to build a security infrastructure that can defend against all of it.

Cryptographic Barrier Architecture

Probably the greatest example of how this contributes to feature design is the cryptographic barrier in Vault. For those of you who are unaware, Vault's cryptographic barrier is a mechanism that protects all the secrets at rest within Vault with encryption.

Whether it's the data itself or how the data is accessed — the tokens that allow you to access data — what protects all of that at rest such that an adversary can't just jump into your storage backend and steal it? The cryptographic barrier is there to make sure that even if the storage — or your Vault infrastructure, whatever it is — could theoretically be stolen, an adversary would not be able to breach that data.

So again, we look at this as a spectrum: what is the spectrum of possible attacks against this? From a core security perspective, we need to deter an adversary attempting to access that secure storage. They rip out the hard drive, they run away with it. How do we protect against that?

Well, Vault is built to allow two types of unsealing mechanisms for barrier encryption at rest. The first is Manual Unseal. We split the key that unlocks the barrier into a number of shards, and a quorum of those shards needs to be reassembled. Ideally, you're protected by the number of those shards, which makes it hard for an adversary to assemble a quorum.
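Here's roughly what reassembling that quorum looks like against Vault's API, sketched with the official Go client. The shard values are placeholders — in practice each shard is held by a different person, and gathering them in one process is done here only for illustration:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder shards — real ones come from the operators holding them.
	shards := []string{"shard-1...", "shard-2...", "shard-3..."}
	for _, shard := range shards {
		status, err := client.Sys().Unseal(shard)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("progress: %d/%d, sealed: %v\n", status.Progress, status.T, status.Sealed)
		if !status.Sealed {
			break // quorum reached; the barrier key is reconstructed and Vault unseals
		}
	}
}
```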

But also a great way to do this is to use Auto Unseal. That's probably the most common approach we see in Vault Enterprise. Vault Auto Unseal can plug into something like an HSM — or another device that's hardened against physical attack, or against the type of adversary you perceive to be attacking your infrastructure.

This is going to protect against most types of malicious cybercriminals: adversaries like Burn, hacktivists — even well-resourced adversaries like organized crime, or even some state espionage. The barrier in Vault is built so that it provides a very high grade of security that is relatively easy to use, but at the same time can stand up to a lot of different types of attacks — in such a way that you, as a user, are hopefully not yielding cryptographic material based on how you recompose that key. Ideally, you're not even recomposing that key at all; Auto Unseal is taking care of all that for you.

But there are times when Vault is going to stand up to dedicated adversaries that have cryptanalytical capabilities. They have codebreaking capabilities — computers dedicated to codebreaking. They have cryptanalysts: people with PhDs in computer science or a deep background that lets them perform these types of attacks. How do we stand up against them?

Well, frequently, there'll be a very explicit compliance requirement associated with these. For example, FIPS 140-3 is a very common compliance requirement in the Western world. We need to build ways for Vault to be able to align with those compliance requirements and leverage cryptographic modules where applicable.

If you're auto unsealing with Vault, we have a number of capabilities in Vault Enterprise that allow you to do things like augment the entropy pool — so that an adversary attacking the system's entropy can't shrink the space of possible keys generated for encryption. Within Vault Enterprise today, we can inject entropy from an external source — even a quantum device — via PKCS#11.
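Conceptually, entropy augmentation mixes an external entropy source into key generation so that a weakened system RNG alone can't shrink the keyspace. A simplified sketch of that mixing idea — my illustration, not Vault's PKCS#11 implementation:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// mixEntropy XORs bytes from the system CSPRNG with bytes from an
// external entropy source, so key material stays unpredictable even
// if one of the two sources is weakened.
func mixEntropy(external []byte) ([]byte, error) {
	system := make([]byte, len(external))
	if _, err := rand.Read(system); err != nil {
		return nil, err
	}
	mixed := make([]byte, len(external))
	for i := range mixed {
		mixed[i] = system[i] ^ external[i]
	}
	return mixed, nil
}

func main() {
	// In practice this would come from an HSM or quantum entropy device
	// over PKCS#11; a zeroed slice stands in for the sketch.
	external := make([]byte, 32)
	key, err := mixEntropy(external)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", key)
}
```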

Additionally, we can also use Seal Wrap, where we effectively envelope-encrypt all the critical security parameters for Vault with key material that comes from an external crypto module. Because the external crypto module can be something as robust as a FIPS 140-3 Level 4 random number generator, we can faithfully maintain those protections down to Vault's system level.
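The envelope-encryption pattern behind this looks roughly like the following: data is encrypted under a data key (DEK), and the DEK is itself encrypted under a key-encryption key (KEK) that would stay inside the external crypto module. A simplified standard-library sketch, not Vault's actual Seal Wrap code:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"log"
)

// encryptGCM encrypts plaintext with AES-256-GCM, prepending the nonce.
func encryptGCM(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	kek := make([]byte, 32) // in reality: held by the HSM, never leaves it
	dek := make([]byte, 32) // per-object data key generated locally
	if _, err := rand.Read(kek); err != nil {
		log.Fatal(err)
	}
	if _, err := rand.Read(dek); err != nil {
		log.Fatal(err)
	}

	csp := []byte("critical security parameter")
	wrappedData, err := encryptGCM(dek, csp) // data encrypted under the DEK
	if err != nil {
		log.Fatal(err)
	}
	wrappedDEK, err := encryptGCM(kek, dek) // DEK encrypted under the KEK
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("wrapped data: %x\nwrapped DEK: %x\n", wrappedData, wrappedDEK)
}
```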

This allows you to crank that security up to 11 and protect against dedicated cryptanalysis — even people wielding post-quantum cryptanalytical attacks against you. Our goal here is to allow Vault to stand up to a 5,000-qubit quantum computer, because the data you're protecting within Vault is potentially going to be there for decades.

We want to make sure that a decade and a half from now, when something like that is possible, it's not going to be easily leveraged against Vault data — that Vault maintains perfect forward secrecy.

Kill Your Darlings

In summary, adversarial modeling is our technique in Vault to protect Vault data against all possible adversaries, given our knowledge about those adversaries. Very briefly, it's just this: learn about your adversaries, know everything you can about them, and then attack your own system with the knowledge your adversaries would have access to.

If you build a security system well enough, it should stand up against them and stand up to attacks in the real world.
