News

Interview: Armon Dadgar on zero-trust networking

HashiCorp co-founder and CTO Armon Dadgar talks with SiliconANGLE's Jeff Frick about zero-trust networking at the 2018 PagerDuty Summit.

PagerDuty Summit 2018 is PagerDuty's signature conference in San Francisco, focusing on monitoring, observability, and general DevOps practices.

Transcript

Jeff: Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at PagerDuty Summit in the Westin Saint Francis Union Square, San Francisco. We're excited to have our next guest. This guy likes to get into the weeds. We'll get some into the weeds, but not too far in the weeds. Armon Dadgar, he's the co-founder and CTO of HashiCorp. Armon great to see you.

Armon: Thanks so much for having me, Jeff.

Jeff: Absolutely, so you're just coming off your session, so how did the session go, what did you guys cover?

Armon: It was super good. I mean, I think what we wanted to do is take a broader look and not talk too much just about monitoring, and so the talk was really about zero-trust networking—the what, the how, the why.

Jeff: Right, right. So that's a very important topic. Did Bitcoin come up, or blockchain, or were you able to do zero-trust with no blockchain?

Armon: We were able to get through it with no blockchain, you know, thankfully I suppose. But I think the gist of it is that zero-trust networking is still at that nascent point where people are like, "hey, zero-trust networking, I've heard of it, I don't really know what it is," or what mental category to put it in. So I think what we tried to do was not get too far in the weeds, as you know I tend to do, but start high-level and say, "what's the problem?"

I think the problem is we live in this world today of traditional flat networks, where I have a castle and moat. I wrap my data center in four walls, all my traffic comes over a drawbridge, and you're either on the outside and you're bad and untrusted, or you're on the inside and you're good and you're trusted. So, what happens when a bad guy gets in? It's this all or nothing model.

Jeff: But now we know, the bad guys are going to get in. It's only a function of time.

Armon: Right. And I think you see it with the Target breach, the Neiman Marcus breach, the Google breach, Equifax. The list goes on. It's a bad idea to assume they never get in.

Jeff: So assume they get in. If you know the bad guys are going to get in, you gotta bake that security in at all the different levels of your applications, your data, all over the place.

Armon: Exactly.

Jeff: So what are some things you guys covered in the session?

Armon: I think the core of it is really saying—how do we get to a point where we don't trust our network, where we assume the attacker will get on the network, and then what? How do you design around that assumption?

What you really have to do is push identity everywhere. So every application has to say, "I'm a web server, and I'm connecting to a database. Is this allowed?" Is the web server allowed to talk to the database? And that's really the crux of what Google calls BeyondCorp and what other people call zero-trust networking: this idea of identity-based systems, where I'm saying it's not IP1 talking to IP2, it's web server talking to database.
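
A minimal sketch of what identity-based rules look like in code, assuming nothing about Consul's or BeyondCorp's actual APIs; the Rule type and allowed function here are invented purely for illustration. Access is keyed by service identity ("web server may talk to database") rather than by IP address, and anything without an explicit rule is denied, which is the zero-trust default.

```go
package main

import "fmt"

// Rule allows a named source service to talk to a named destination service.
// Rules are expressed in terms of identity, not IP addresses or ports.
type Rule struct {
	Source      string
	Destination string
}

// allowed returns true only if some rule explicitly permits source -> destination.
func allowed(rules []Rule, source, destination string) bool {
	for _, r := range rules {
		if r.Source == source && r.Destination == destination {
			return true
		}
	}
	return false // zero trust: deny by default
}

func main() {
	rules := []Rule{
		{Source: "web", Destination: "database"},
	}
	fmt.Println(allowed(rules, "web", "database"))   // true: the web server may reach the database
	fmt.Println(allowed(rules, "cache", "database")) // false: no rule, so the connection is denied
}
```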

Jeff: Right, right. Because then you've got all the roles and the rules and everything associated at that identity level.

Armon: Bingo. Exactly. Exactly. And I think what's made that very hard historically is when we ask, what do you have at the network level? You have IPs and ports. So, how do we get to a point where we know one thing is a web server and one thing's a database?

I think the crux of the challenge there is three pieces.

  1. You need application identity. You have to say this is a web server, this is a database.

  2. You need to distribute certificates to them and say—you get a certificate that says you're a web server, you get a certificate that says you're a database.

  3. And you have to enforce that access. So everyone can't just randomly talk to each other.
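
To make the second and third pieces concrete, here is a short sketch using Go's standard crypto/x509 package. This is an illustration, not HashiCorp's or any particular mesh's actual issuing code: the workload's identity is encoded as a SPIFFE-style URI SAN in its certificate, so the receiving side can read "who is calling" from the verified peer certificate instead of from an IP address. The spiffe://example.org/web-server identity and the self-signed certificate are assumptions made for brevity; in a real deployment a mesh CA would issue and rotate short-lived certificates.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net/url"
	"time"
)

func main() {
	// Key for the workload; in practice a mesh agent would generate and rotate this.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Encode the service identity as a SPIFFE-style URI SAN rather than a hostname or IP.
	id, _ := url.Parse("spiffe://example.org/web-server")
	template := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "web-server"},
		URIs:         []*url.URL{id},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // short-lived, rotated often
	}

	// Self-signed here for brevity; a real deployment would use the mesh CA as the parent.
	der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	cert, _ := x509.ParseCertificate(der)
	// The receiving side reads the identity from the verified peer certificate
	// and checks it against its rules ("is web-server allowed to talk to me?").
	fmt.Println("peer identity:", cert.URIs[0])
}
```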

Jeff: Right. Well then what about context too, right? Because context is another piece. Maybe somebody takes advantage of and has access to the identity, but is using it in a way that's atypical to the expected behavior, or the interactions just don't make sense. So context really matters quite a bit as well.

Armon: Yeah, you're super right, and I think this is where it gets into—not only do we need to assign identity to the applications, but how do we tie that back into rich access controls of who's allowed to do what, and audit trails of, okay, it seems odd that this web server that never connects to this database is suddenly doing so out of the blue. Why? Do we need to react to it? Do we need to change a rule? Do we need to investigate what's going on? But you're right, that context of what's expected versus what's unexpected is important.
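
As a toy illustration of that point (not any particular product's audit pipeline; the auditor type here is invented for the example), the sketch below keeps a baseline of source-to-destination pairs it has already seen and flags the first time a new pair appears, which is exactly the kind of signal an audit trail or alerting rule would key off.

```go
package main

import (
	"fmt"
	"time"
)

// connection records one observed service-to-service call, by identity.
type connection struct {
	Source, Destination string
}

// auditor flags pairs of services that have never talked to each other before.
type auditor struct {
	seen map[connection]bool
}

func newAuditor() *auditor { return &auditor{seen: make(map[connection]bool)} }

// observe returns true if this source->destination pair is new and therefore worth investigating.
func (a *auditor) observe(src, dst string) bool {
	c := connection{Source: src, Destination: dst}
	if a.seen[c] {
		return false
	}
	a.seen[c] = true
	return true
}

func main() {
	a := newAuditor()
	a.observe("web", "database") // establish the expected baseline

	if a.observe("batch-job", "database") {
		// In a real system this would feed an audit trail or alerting pipeline.
		fmt.Printf("%s: first connection from batch-job to database, investigate\n",
			time.Now().Format(time.RFC3339))
	}
}
```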

Jeff: Right. Then you have this other X-factor called shared infrastructure, and hybrid cloud. I've got apps running on AWS, I've got apps running at Google, I've got apps running at Microsoft, I've got apps running in the data center, I've got some dev here, I've got some prod here. That adds another little X-factor to the zero-trust.

Armon: Yeah, I think I heard it aptly called once, "we have a service mess on our hands." We have this stuff just sprawled everywhere now. How do we wrangle it, how do we get our hands around it? So, as much as "service mess" is a play on the language, I think this is where that emerging category of service mesh does make sense.

It's really looking at that and saying—I'm going to have stuff in private cloud, public cloud, maybe multiple public cloud providers. How do I treat all of that in a uniform way? I want to know what's running where. I want to have rules around who can talk to whom. That's a big focus for us with Consul: how do we have a consistent way of knowing what's running where, and a consistent set of rules in terms of who can talk to whom, and do it across all these hybrid environments.
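
A rough sketch of that idea, with a made-up catalog rather than Consul's actual data model: one catalog records which service is running in which environment, and one identity-based rule set answers who may talk to whom, regardless of which cloud an instance happens to land in.

```go
package main

import "fmt"

// instance is one running copy of a service, wherever it happens to live.
type instance struct {
	Service     string
	Environment string // "on-prem", "aws", "gcp", ...
	Address     string
}

func main() {
	// A uniform catalog: what is running where, across hybrid environments.
	catalog := []instance{
		{Service: "web", Environment: "aws", Address: "10.0.1.12"},
		{Service: "web", Environment: "on-prem", Address: "192.168.4.7"},
		{Service: "database", Environment: "gcp", Address: "10.8.0.3"},
	}

	// One rule set, expressed in terms of service identity, applied everywhere.
	allowed := map[string][]string{
		"web": {"database"},
	}

	// Answer "what is running where" and "who may talk to whom" from the same data,
	// regardless of which cloud or data center an instance landed in.
	for _, i := range catalog {
		fmt.Printf("%s runs in %s at %s, may call: %v\n",
			i.Service, i.Environment, i.Address, allowed[i.Service])
	}
}
```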

Jeff: Right. But wait, don't buy it yet, there's more. Because now you've got all the APIs. So now you've got all this application integration, many of which are with cloud-based applications. So now you've got that complexity and you're pulling all these bits and connections from different infrastructures, different applications, some in-house, some outside, so how do you bring some organization to that?

Armon: That's a super good question, and if you ever want to role-change, take a look at our marketing department, you've got this down.

Jeff: [laughter]

Armon: I would say what it comes down to is heterogeneity is going to be fundamental. You're going to have folks that are going to operate different tools, different technologies, for whatever reasons. Might be historical choice, might be just they have better relations with a particular vendor, so our view has been: how do you interop with all of these things?

Part of it is a focus on open source. Part of it is a focus on being API-driven. And part of it is that you have to do API integrations with all these systems, because you're never going to get the end user to standardize everything on a single platform.

Jeff: Right, right. It's funny. We were at a show talking about RPA, robotic process automation, and they treat those processes as employees, in that they give them identities.

Armon: Interesting. Right.

Jeff: So they can manage them: you hire them, you turn them on, they work for you for a while, and then you might want to turn them off after they're done doing whatever you put them in place for. But literally they're treating them as employees, with an employee-like identity that can have all the assigned rules and restrictions, to then let the RPA do what it was supposed to do.

Armon: Interesting.

Jeff: Interesting concept.

Armon: Yeah, and I think it mirrors what we've seen in a lot of different spaces, which is that what we were maybe managing before was the very physical thing. Maybe it was called robot one, two, three, four. Or in the same way we might say this is the server at IP 1.2.3.4 on our network. So we're managing this really physical unit, whether it's an IP, a machine, a serial number. How do we tick up the level of abstraction and instead say, actually, all of these machines, whether it's IP1, IP2, or IP3, they're a web server, and whether it's robot one, two, or three, they're a door attacher?

Now we're talking about identity, and it gives us this more powerful abstraction to talk about these underlying bits. I think it follows the history of everything. Which is—how do we add new layers of abstraction to let us manage the complexity that we have.

Jeff: Right. So, it's interesting, right? In Ray Kurzweil's keynote earlier today, hopefully you saw that, he talked about basically exponential curves, and that's really what we're facing. The amount of data, the amount of complexity is only going to increase dramatically. We're trying to virtualize so much of this and abstract it away, but then that adds a different layer of management. At the same time, you're going to have a lot more horsepower to work with on the compute side. So is it kind of like the old Intel story, "I've got a faster PC, it's getting eaten up by more Windows"? Do you see the automation being able to keep up with the increasing layers of abstraction?

Armon: Yeah. I mean, I think there's a grain of truth to that. Just because we're getting access to more resources, are we using them more efficiently? I think in fairness, with each layer of abstraction we're introducing additional performance cost. But I think overall what we might be doing is increasing the amount of compute tenfold, but adding a five percent additional management fee. So net, we're still able to do much more productive work and go to a much bigger scale, but only if you have the right abstractions.
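
(As a rough back-of-the-envelope reading of that claim, under the assumption Armon is making: going from 1 unit of compute to 10 while paying roughly 5 percent in management overhead still nets about 10 × 0.95 = 9.5 units of useful capacity, versus 1 before. The overhead is real, but it is dwarfed by the gain in scale.)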

I think that's where this kind of stuff comes in: okay, great, I'm going to have 10 times as many machines. How do I deal with the fact that my current security model barely works at my current scale? How do I get to 10x the scale? Or, if I'm pointing and clicking to provision a machine, how does that work when I'm going to manage 1,000 machines?

Jeff: Yeah.

Armon: You have to bring in additional tooling and automation, think about it at the next higher level. I think that's all part of this process of adopting cloud, and getting that leverage.

Jeff: This is so interesting, just the whole scale discussion. At the end of the day, scale wins. There's a great interview with James Hamilton from AWS. It's old, but he talks about scale and how many servers were sold in whatever calendar year it was versus how many mobile phones were sold. It's many orders of magnitude of difference. He was thinking in terms of those types of scale, as opposed to the server sales figure, which was a big number on its own. The scale challenge introduced by these giant clouds and Facebook and the like really changed the game fundamentally in how you manage these things.

Armon: Totally. I think that's been our view at HashiCorp. When you talk about the tidal shift of infrastructure from on-premise, relatively static, VMware-centric environments to AWS + Azure + Google + VMware, it's not just a change from one server here to one server there. I'm going from one server to 50 servers, and I'm changing them every other day rather than every other year.

So it's an order of magnitude in scale, but also an order of magnitude in terms of the rate of change. I think that puts pressure on everything: how do I provision? How do I deploy applications? How do I secure all of this stuff? I think every layer of the infrastructure gets hit by this change.

Jeff: All right, so you're a smart guy, you're always looking forward. What are some of the things you're working on down the road—big challenges that you're looking forward to tackling?

Armon: Ooh. Okay. That's fun. I mean, I think the biggest challenge is how do we get this stuff to be simpler for people to use? Because I think what we're going through is this see-saw effect. Which is: we're getting access to all this new hardware, all this new compute, all these new APIs, but it's not getting simpler. It's getting exponentially more complicated.

I think part of it is, how do we go back and look at what the core drivers are here? Okay, we want to make it easier for people to deliver and deploy their applications. Let's go back, in some sense, to the drawing board and say: how do we take all of these new goodies that we've been given, but make them consumable and easy to learn? Because otherwise, what's the point? It's like, here's a catalog of 50,000 things and no one knows how to use any of them.

Jeff: Right. It's funny, I'm waiting for that next abstraction for AWS. Instead of the big giant slide that Andy shows every year, I just want to plug in and have you figure out what connects on the backend. I can hardly even read that stuff.

Armon: Maybe AI will save us.

Jeff: Let's hope so. All right, Armon. Thanks for taking a few minutes out of your day and sitting down with us.

Armon: My pleasure, thanks so much, Jeff.

Jeff: All right. He's Armon, I'm Jeff. You're watching theCUBE, we're at PagerDuty Summit, in downtown San Francisco. Thanks for watching.
