
Keynote: The Evolution of Consul & Unveiling of HashiCorp Consul Service on Azure

Get an overview of the evolution of Consul over the last few releases, plus an intro to HashiCorp Consul Service (HCS) on Azure and VMware NSX-ServiceMesh integration.

Mitchell Hashimoto, co-CTO and co-founder of HashiCorp, discusses the journey to Consul 1.6—a full-fledged service mesh released by HashiCorp this year—and then unveils and demos HashiCorp Consul Service (HCS) on Azure, a new managed service discovery and service mesh product available on the Microsoft Azure cloud.

He then invites Brendan Burns, Distinguished Engineer at Microsoft, onto the stage to demo a Pong application using HCS on Azure. Finally, Pere Monclus, the CTO of Networking and Security at VMware, comes on stage to introduce integration between Consul Connect and the VMware NSX-ServiceMesh.

Transcript

At this point I'd like to welcome Mitchell Hashimoto, founder and CTO, to the stage. Thanks so much.

Mitchell Hashimoto: Hello, everyone. Before I get started, a lot of the people that worked on what Armon just talked about in Terraform, they've been working on all that for a couple of years now. I think it's amazing. If you think it's amazing as well and want to thank them, could you just give them a round of applause right now, please? Thank you, Terraform team.

It's amazing to see you all here today. I am going to be talking about Consul. I've been really excited about Consul lately.

For those of you who have watched several of our events or been to multiple keynotes, you may notice that, either by choice or by luck, I have gotten the Consul section at 4 events in a row now. That's not really a mistake. I am really excited about Consul.

And why is that? Consul's exciting to me because it is chipping away at solving this very important service networking challenge. Armon explained the challenges of multi-everything, and networking really sits at the core of that.

To show you visually what I mean, when we've been talking about multi-platform, this transition has been going on for a very long time. If you zoom out to 3 or 4 decades, you could begin at mainframes, and if you find any company that's been around long enough, you'll see almost all of these still in action today.

The mixed technology, this multi-platform, multi-paradigm that's going on, is still very much active. There's mainframe to bare metal to VMs, and then recently containers and serverless. But even though containers and serverless are recent, the notion of paradigm shift and technology change in a company is not new.

The network is the glue for a multi-everything world

When you look at all of these things, they still interoperate today, they still work together today, and the one thing that has always been there as the least common denominator, the one thing that has made applications on different platforms always work has been the network.

No matter how much changes above, we've always been able, for the past few decades, to rely effectively on TCP to be there for us in order to make everything work.

Any protocols that live on top of TCP enable us to connect things like containers to mainframes. Even newer, cloud-native or cloud-first companies that started with cloud VMs, they've now had to endure the paradigm shift to container platforms and serverless.

When cloud was coming up, those companies got to point and laugh a little bit at the companies stuck with physical datacenters, but they're now being pointed and laughed at for being stuck with cloud VMs as people move to containers. And they're realizing the very real business challenge of having mixed technologies, and the importance of the network underneath to make this all work together.

When we try to use the network as a solution for these mixed technologies, we run into 3 key challenges:

  • Naming
  • Authorization
  • Routing

Network challenges: Naming

Naming is the simplest challenge. It's the one you run into first, typically. It's really basic: What do you call something in order to address it? The web service needs to talk to the upstream auth service. How does it tell the network that it wants to talk to that thing?

Traditionally, we used IP addresses, and particularly static IP addresses, or virtual IP addresses to make this a reality, to give something a fixed name, so that when we say, in this example, .51, we know that we mean to go to the auth service. When you had very static machines, this worked fine.

As we scaled out horizontally, in this example having multiple auth services, you still want some sort of fixed name to maintain this consistency. Historically, in a traditional static environment, as Armon showed, you would reach for load balancers as the solution.

These load balancers are doing very little in terms of what they're capable of, but the primary thing that they're solving is the naming challenge. Load balancers don't change very often. Their config does, but the machines don't, so you're able to keep this static address and route it to these more dynamic backends that are coming and going.

As we look forward into a Consul-oriented world, we've begun to replace these IP addresses with the actual logical service names instead. Instead of asking for .51, you explicitly use DNS to ask for the auth service, and Consul dynamically figures out where that goes, and you're able to eliminate load balancers for this particular solution.
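
To make this concrete, here is a minimal sketch, in Go, of what this looks like from an application's point of view. It assumes a local Consul agent answering DNS on its default port (8600) and a registered service named "auth"; the names are illustrative, not taken from the demo.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve the logical service name through the local Consul agent's DNS
	// interface (default port 8600) instead of hard-coding an IP address.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "127.0.0.1:8600")
		},
	}

	addrs, err := r.LookupHost(context.Background(), "auth.service.consul")
	if err != nil {
		panic(err)
	}

	// Consul returns the addresses of healthy "auth" instances; pick any one.
	for _, a := range addrs {
		fmt.Println("auth instance:", a)
	}
}
```

Because the lookup happens at request time, the addresses returned track whatever healthy instances Consul currently knows about, which is what lets the static load balancer drop out of this path.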

This is generally what people adopt Consul for first. We've seen multiple talks—there was one at HashiConf EU a few months ago and one last year—from our own users and customers explaining that this is why they initially adopted Consul, and that they're able to realize considerable cost savings doing just this.

There was a tweet just a few weeks ago about somebody who is fully in the cloud, who adopted just this to eliminate application load balancers between services and was able to drop their networking costs by 90%.

So this is the bread and butter of why you initially adopt Consul. We describe the evolution of adoption maturity as crawl, walk, run. This is the crawl step, to get toward service mesh and some more complex solutions.

Network challenges: Authorization

Once you have the ability to name something, you can address it and reach it, but how do you control access to it? That's where the authorization problem comes up.

In a world where we're viewing everything as IP addresses, it makes perfect sense to use firewalls that operate on IP addresses as the mechanism to enforce this type of authorization. Using firewalls, you would do something like, "Allow .51 to .104," and in that way protect access to your auth services, or whatever is back there.

In a more Consul-oriented world, we are now operating authorizations, which we call intentions, through logical service names. In the Consul world you would say, "Allow web to auth." The IP addresses disappear, and authorization happens.
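
As a rough sketch of what "Allow web to auth" looks like in practice, here is how such an intention might be created with Consul's official Go client (github.com/hashicorp/consul/api), assuming a local agent with permission to write intentions; the same thing can be done from the UI or CLI.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent with default settings.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// "Allow web to auth": authorization is expressed against logical
	// service names, not IP addresses.
	intention := &api.Intention{
		SourceName:      "web",
		DestinationName: "auth",
		Action:          api.IntentionActionAllow,
	}

	if _, _, err := client.Connect().IntentionCreate(intention, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("intention created: allow web -> auth")
}
```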

Network challenges: Routing

The last challenge is routing, and routing really has 2 aspects. There's the load-balancing aspect of sending traffic to the correct instance, and there's the traffic-shaping aspect, which is resolving it to the right subset of the services you have running.

The load balancing is pretty simple; it's literally balancing load. And the traffic-shaping aspect is, maybe you have a V2 that you want to send 10% of traffic to, and so on. These are really important challenges to solve, and traditionally they're handled with a load balancer, because we already had one in place, but that introduces quite a number of load balancers into the system, and complexity as well.

With Consul, we're able to solve this with our agent-based model, with a sidecar model, using the service mesh functionality in Consul.

If we zoom out now with these 3 challenges and look at a traditional datacenter, it looked like this. There was a very clear north-to-south traffic path.

You would have an external firewall. Global internet traffic would come in the front door, which is the firewall. That would go to some frontend load balancer that distributed load to the web tier. They would do some processing, and then it would go back to an internal backend system, like a database, and usually there was another firewall there.

These tended to be monolithic applications, they tended to be on VMs, or in physical datacenters, and they didn't move very often; they were fairly static.

In this sort of environment, manually handling updates, manually updating firewall rules, using IP addresses, is a perfectly reasonable thing to do. When there's not a lot of change happening, when there's not a lot of dynamism, you don't need automation.

But if we zoom out a bit more, and we move forward into a more modern deployment scenario, we start to see something like this where you might have adopted a new datacenter, or a new location. And because this is a new thing, you've also decided to re-platform onto some new technology, such as Kubernetes. And all the while, you still have this existing system, and maybe you intend to move things from the existing system to the new, but that's never an atomic, instantaneous event.

There's always this period of time where there are multiple technologies that exist, because the existing system's there. It's very common that you have to reach back either for user data, or billing, or identity, or anything, to come back to the old datacenter.

So you have to set up something like VPNs, or SD-WAN, or direct connects. There's some mechanism you need to connect these 2 things. In addition, you have a different point of entry into the second datacenter.

There are now 2 ways for internet traffic to reach you, so the notion of a perimeter is starting to disappear, and this is becoming more and more complex.

This happens fairly quickly for a company, and even if you were a cloud-first company, if you existed before the rise of containers or something, this is now a challenge for you. You have the cloud VM and the container challenge, and you're trying to bridge these gaps. Or if you're just a growing company, and you're moving into multiple regions, this quickly becomes a challenge.

Let's zoom out one more time and see the end state of high-scale complexity that comes about, and the importance of networking in that environment.

Complexity and the importance of networking: A true story

In this scenario, it looks like complete chaos, but this is really more the rule than the exception, as I found talking to hundreds of companies in the Global 2000. I want to describe how this happens with a story, an anonymized story, but based on facts.

Let's imagine that you're a company that was cloud-first. To keep it fairly modern, you didn't even have a physical datacenter. You started cloud-first about 8 years ago. You're cutting-edge. You went all in on, let's say, AWS as your cloud platform.

So, for a long period of time, you invested heavily in tooling there. You have all your VMs there, everything is homogeneous, and you're able to use all the tools that AWS provides to have a very clean deployment.

At some point in the 8 years since that company was incorporated, the rise of containers and container platforms starts to happen, but things like Kubernetes don't appear immediately. The first things that come up are technologies like ECS, Docker Swarm, and others.

Being the cutting-edge company that you are, you quickly adopt these things, but you still have VMs, and you're mixing them together. But it's still manageable, though you have 2 systems.

You continue growing as a company, and suddenly you have global distribution, so you introduce new datacenters. You're solving those challenges of having multi-region capabilities. And at the same time, Kubernetes is standardizing the container platform world.

So you start adopting Kubernetes. Then function as a service. You can see how this goes.

Let's assume now, after 8 years, you're a billion-dollar company, you're doing very well, and as a business, at an executive level, you decide that the best thing for this company is to acquire an adjacent startup company that's much smaller than you, but doing well. So you go to acquire this company for very good business reasons, but since they're newer, they've decided to go all in on Kubernetes on GCP.

So you bring this company into the fold, and suddenly you went from this very pure AWS-VM-first, cutting-edge company to having this as a very real reality.

That happened to a real company, a cloud-first company. You can imagine if you just stretch it to any other company that's been around more than 10 years, more than 20 years, that this is spanning clouds and spanning to their private datacenters. It includes a lot more technologies.

This is a very real challenge, and like I said, from what I've seen, is much more the rule than the exception.

Consul and the service networking challenge

Two years ago at HashiConf here in the US, we announced Consul 1.0, and after announcing 1.0, the Consul team has been focused on solving this broader service networking challenge.

Release by release we've been chipping away at this, and at HashiConf EU a few months ago, we announced Consul 1.6, which had the largest user-facing changes toward solving these problems. And we announced 2 major features that really took it to the next level toward solving the service networking challenge. I want to quickly review those.

The first feature we released was something called mesh gateways. Mesh gateways are a solution that allows us to automatically have cross-network service mesh.

When you have 2 networks that are either in multiple datacenters or VM-to-container, or just have completely overlapping IP addresses, you're able to automatically make these connections work.

We do it in a way that maintains full end-to-end encryption, because security is an expected and necessary aspect of the network today, and our solution always spans every available platform: Kubernetes, VMs, physical, etc.

Service networking is such a core technology for bridging multiple paradigms that it's only a real solution if it solves the heterogeneity problem. If you solve service networking or service mesh for a single, homogeneous platform, that's solving a problem, but it's not solving the problem that we're seeing businesses have. The problem is bridging together multiple heterogeneous systems, and that is what we are focused on solving.

So, if we look at this as a diagram, here are 2 networks. The most common reason you have 2 networks with the exact same IP address space is generally Kubernetes. So you have 2 networks. You have an API service in one, you have a database in the other, and they want to talk to each other.

Because they have overlapping IP space, you can't just set up a VPN. You have to do something a little more creative. It's possible, but challenging.

With Consul, you just put a mesh gateway on the edge of each of your networks, and the connections just happen.

The API service that's requesting the database doesn't even need to be aware that the database is in another network. It could be a datacenter or anything. The API just says, "I want to talk to the database." Consul figures out how to do that routing, whether it's allowed, and so on.

This is really important, because I don't think developers should really care what the topology of the underlying system is. This gives freedom to the operators and network engineers underneath to move applications and do what's best for their deployment, and things still work at the application level.

If we look at a slightly more zoomed-out example, this is how it would work across container platforms, cloud VMs, and physical datacenters. You would just put a gateway on the edge of each of your networks, and now all of them could connect, even over the global internet, or multiple regions, etc.
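
As a hedged illustration of the operator-side configuration, the sketch below uses the Go client to set a service-defaults config entry that sends a service's upstream traffic through the mesh gateway in its local network. The service name "api" and the choice of "local" mode are assumptions for the example; the gateways themselves are separately deployed Consul-managed proxies at each network's edge.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Tell the "api" service's sidecar proxies to reach remote upstreams
	// through the mesh gateway in the local network, so overlapping or
	// unroutable IP space on the far side doesn't matter.
	entry := &api.ServiceConfigEntry{
		Kind:        api.ServiceDefaults,
		Name:        "api",
		Protocol:    "http",
		MeshGateway: api.MeshGatewayConfig{Mode: api.MeshGatewayModeLocal},
	}

	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("service-defaults for api set: mesh gateway mode = local")
}
```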

That was one aspect of the service networking challenge. I like to break service networking into 2 different categories. There's the networking part, which is, How do you get the byte from Point A to Point B? The byte just has to get there. And the security challenge is also part of that: you likely want the byte to get there encrypted.

Networking and progressive delivery

Then there's the other aspect of service networking, which is the category of progressive delivery. This is a term that RedMonk founder James Governor came up with. Progressive delivery is an umbrella that encapsulates features such as canary deployments, feature flags, and rolling deploys.

The business challenge it's solving is the idea that we need to have a mechanism to limit risk when rolling out new software. This is one aspect of limiting risk. You probably also want testing and different environments and CI/CD and so on, but bugs get to production. They always get to production.

So how do we limit the blast radius of deploying these applications? And as we're reaching the point where it's fairly mainstream for companies to have hundreds or thousands of microservices that are going out to production, how are you supposed to do this safely without bringing everybody to their knees? The features underneath progressive delivery are the way you do that.

With Consul, we also announced Layer 7 functionality to do HTTP and gRPC routing, traffic splitting, and custom service resolution. All of these solve the progressive delivery aspect of service networking.

To show this in a diagram, prior to Consul 1.6, when you asked for the web service, this is what it looked like. Consul would just look up the web services and return them to you. In Consul 1.6 you can now introduce logic in between these. All of them are optional, but this is what the full chain would look like. You could do routing, traffic splitting, resolution.

To run through a quick example of what this does, at Step 1, if we ask for the web service with a /admin path, that goes to the admin service. And then when we're in the admin service, we want to split 10% of the traffic toward V2, and 90% toward V1.

So we split toward V2, and finally the resolution step answers the question, "How do I resolve the V2 admin service to an actual instance?" There we're able to say, "All the services named 'admin' that have a metadata version equal to 2 are that service." Finally, the green boxes on the diagram are the instances Consul determines you want to talk to, and it returns those all the way out.
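
The chain in that walkthrough maps onto three Layer 7 config entries: a service-router, a service-splitter, and a service-resolver. Here is a minimal sketch of those three entries written through the Go client; the service names, subset names, and the exact filter expression are assumptions that mirror the example above rather than the demo's actual configuration.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	entries := []api.ConfigEntry{
		// Step 1: route requests to web with an /admin path prefix over to
		// the admin service.
		&api.ServiceRouterConfigEntry{
			Kind: api.ServiceRouter,
			Name: "web",
			Routes: []api.ServiceRoute{{
				Match: &api.ServiceRouteMatch{
					HTTP: &api.ServiceRouteHTTPMatch{PathPrefix: "/admin"},
				},
				Destination: &api.ServiceRouteDestination{Service: "admin"},
			}},
		},
		// Step 2: split admin traffic 90/10 between the v1 and v2 subsets.
		&api.ServiceSplitterConfigEntry{
			Kind: api.ServiceSplitter,
			Name: "admin",
			Splits: []api.ServiceSplit{
				{Weight: 90, ServiceSubset: "v1"},
				{Weight: 10, ServiceSubset: "v2"},
			},
		},
		// Step 3: resolve the v1/v2 subsets to instances by their version
		// metadata.
		&api.ServiceResolverConfigEntry{
			Kind: api.ServiceResolver,
			Name: "admin",
			Subsets: map[string]api.ServiceResolverSubset{
				"v1": {Filter: "Service.Meta.version == 1"},
				"v2": {Filter: "Service.Meta.version == 2"},
			},
		},
	}

	for _, e := range entries {
		if _, _, err := client.ConfigEntries().Set(e, nil); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("routing, splitting, and resolution config entries written")
}
```

Each entry is optional; if one is absent, Consul simply skips that step of the chain.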

You can see how this functionality lets you do progressive delivery.

All of this is in Consul 1.6. We announced that at HashiConf EU, it reached stability and final release about a month ago, and this is all out already. But we can see the importance that networking plays in this multi-everything, multi-platform adoption, and the critical functionality that Consul provides.

It's very important that Consul is as easy to adopt as possible. The Consul core team has been doing really good work every single release to make that easier. If you look back at the past few releases, we've made securing by default easier, we've made introducing ACLs easier, we've made upgrades a little bit easier.

Introducing HashiCorp Consul Service on Azure

Little by little we've made things easier, but we always wanted to do more, and so we went to the drawing board, and it's like, "What can we do to make Consul easier to adopt?" An obvious answer stood out, which is, "We could run it for you."

So today, I'm excited to announce the HashiCorp Consul Service on Azure. HCS on Azure is the easiest way to launch and integrate service discovery and service mesh. It is Consul as a service, and it is also the first fully managed service mesh as a service.

As a service, you get the features you would expect. We provision Consul clusters for you automatically, we handle backup and restore, we handle upgrades, we handle scale up and scale down. All of this is consumption-based pricing, time-based pricing; it's not a big up-front commitment.

By integrating very deeply with Azure, we're able to keep Azure-native identity and billing. With this release, you don't need to sign up with us; you can use an Azure identity, and our charges show up as a first-class item on your Azure bill, billed directly through however you pay Azure.

All of this is HashiCorp-managed, so when you create a Consul cluster, when a backup is necessary, when we perform an upgrade, all of that control plane work is handled through software that's built and run by us.

It runs in our datacenters, or cloud datacenters, the software's run by our engineers, etc. We've hired and built out a HashiCorp SRE team that is handling the management of all the customer clusters. But all the while we've partnered with Microsoft to deeply integrate, from sales to technical implementation.

Here’s a diagram of how this works. You would find the HCS application, and you would ask us to create it. That comes back to our software, and we handle the creation, and the creation of the cluster shows up in your account in a managed group. We have access to modify the resources just in that group, but by putting it into your account you could easily do VNet peering, or handle any way to get access to those Consul clusters.

It's way easier to see it with a demo, so I'm going to show you a video of what this looks like. What we're going to start with is the page you would see after hitting Create Consul Cluster.

You'll notice this is right in the Azure portal. It feels just like a first-class system directly in Azure.

This is the first thing you see. One of the first things is selecting the region. We are supporting, right away at launch, every single public Azure region.

You would select your resource group that you want to launch this in. You could create a new one, which is what we're going to do here, an empty resource group to launch all the Consul resources into. Then you hit Next, and you're presented with a bunch of Consul settings.

You could name your cluster, name the logical datacenter, choose the Consul version you want to deploy, determine your backup and upgrade behavior. Do you want automatic upgrades? Do you want manual upgrades? What interval do you want us to take snapshots in? There's a number of options here that you could choose.

You go ahead and choose them, hit Next, we validate the configuration for you, and then once it's valid, you could hit Create. That initiates a deployment, which we'll click into here. The deployment takes a few minutes; we fast forward it in the video.

Soon you'll see an HCS instance show up right here, and then very shortly after that, that'll become clickable. When you click into that HCS instance, you land on an "HCS on Azure" dedicated page, and you'll see in the sidebar a bunch of Consul-specific things.

The first one we look at is the Consul clusters list, which lists all the clusters you have. You can add more, you can see metadata such as the version, and so on. If you select a cluster, you can do a bunch of manual steps, such as manually forcing a snapshot or manually performing an upgrade.

And then on the left-hand side, you'll see a link to the UI as well. So we click that, and it loads the Consul UI directly in the Azure portal.

This is a lot fancier than it looks, because what we're doing here is, when we deploy a Consul cluster, we deploy everything secured by default. TLS is fully set up, ACLs are fully set up, gossip encryption is all there.

When you load this UI you'll notice that there's no token entry required or anything. Since we integrate deeply with Azure, when you click that button, we're able to use your Azure identity to prove that you have access to the Consul UI, and log you right in. So you get access directly there.

You could also choose optionally to put the UI on an external endpoint if you want to handle access yourself, or you could keep it private and only access it through the Azure portal.

The Microsoft connection

This shows you the quick example of HCS on Azure. Like I said, we've partnered very deeply with Microsoft. We're excited by that, and to talk more about Microsoft and Consul, I want to invite to the stage Brendan Burns, a distinguished engineer at Microsoft.

Brendan Burns: Thanks, Mitchell. It's been really great to develop this partnership with HashiCorp and see the benefits that the combined forces of Hashi and Azure can bring to our joint customers.

Because the truth is that when you're building applications in the real world, it's a really complicated, hybrid environment. You need the ability to interact with both existing services that might be running on VM-based infrastructure and newer microservices that might be running in the Azure Kubernetes Service. You need to be able to have all of this work on-premises as well as in the cloud, across a wide variety of environments.

Most applications that we see out there aren't the pretty diagrams that you see in the cloud-native application portfolio, but rather a complex collection of pipes going in different directions.

The combination of Azure's technology and Consul can make a really great way of building those applications together.

To give a demonstration of this, I'm going to go through our legacy application that we're using Consul and Azure to help modernize, and that is Cloud Pong.

Cloud Pong has been a successful legacy application. Well, it started out as a new application, but it became a legacy application based on virtual machines, hard-coded IP addresses between the different players.

We're pretty happy with it, but our agility just isn't where it needs to be. So we're going to microservice-ify that thing. Like in a recent Dilbert cartoon, we had the pointy-haired boss come in and say "Kubernetes" to us. So we're Kubernetes-ifying our service.

But as we do that, the question becomes, "We've got these services over here. Kubernetes is cloud-native, and that's wonderful. But it’s still VM-based for our second Cloud Pong player, and we have to do this migration successfully. How can we make sure that we bridge the world of dynamic services and Kubernetes with a more legacy world of virtual machines?" And it turns out that to do that, Consul is a great solution.

An open service mesh implementation

Consul is something that we see with our customers a lot, specifically around this notion of bridging from a mesh that's in Kubernetes to a mesh that is in other environments. But the question is, How do I drive this? How do I make this work in a Kubernetes-centric way, and how do I ensure that, if I want to use other service mesh implementations, it all works, and that I'm learning and my tools can use the same thing?

To do that, a while ago, at KubeCon in Barcelona, we partnered with HashiCorp and others to come up with this service mesh interface specification. It's an open specification; it's out there on GitHub. You can use this open specification to drive your service mesh so that you're not bound to any particular implementation.

You can use Consul, which works great with the managed service on Azure. If you're in a different environment with a different implementation of service mesh, you can use the exact same Kubernetes objects to configure and set up your application.

To give an example of this in the real world, I've deployed exactly this architecture out onto Azure using Consul, using the Azure Kubernetes service, using a legacy VM, and I want to give a demo of this.

In order to do that, we're going to have a demo of Cloud Pong. Of course, you really can't play Pong without another player. I understand we have a volunteer from the audience who's going to come up here, and we'll do a little Pong on stage for you.

All right, I'm going to set up my game here. Many thanks by the way to the folks who helped me build this. And over here, we're going to set up Player 2. It looks like Player 2's not connecting. She's going to be at a disadvantage. I'm going to win.

No, I'm kidding. It turns out, the reason for this is because Consul, as Mitchell mentioned, is secured by default, and if I go over to the Consul UI here, and if I hit refresh, the intentions are there. Now, if I go back to my Pong, we are ready. Are you ready? It's arrow keys up and down. Space bar if you want to hit the ball. It's my serve.

Guest: OK.

Brendan Burns: All right, here we go. It's thrilling. I should've had a play-by-play announcer.

Guest: It's not moving.

Brendan Burns: Try holding it down.

Guest: This is rigged.

Brendan Burns: It's rigged? No. All right, let's try it now. No? Well, I'm going to have to apologize. I really didn't intend for me to be the only one with controls that worked, but I guess that's the situation we find ourselves in. All right, well, thank you so much.

Guest: Very welcome.

Brendan Burns: Thank you. We're going to have Mitchell come back onto the stage and tell you more. Thanks.

Mitchell Hashimoto: Well, that demonstrates the importance of progressive delivery, I think.

Brendan showed a good example of how you could integrate Consul in a Kubernetes and cloud VM world. This is a super common deployment today. But I also showed earlier the addition of private datacenters, and many private datacenters are out there running VMware technologies.

Going to private datacenters with VMware

We want to make it as simple as possible for Consul to enable service mesh across all of these environments. To talk about this next step of going to private datacenters with VMware, I'd like to invite onto the stage Pere Monclus, who is the CTO in the Networking and Security Business Unit at VMware.

Pere Monclus: We saw this morning, in the discussion with Armon, that everything is going multi-cloud, multi-application, multi-service. At the same time, we are seeing from Mitchell the idea that networking is much more than IP addresses and the connectivity of switches and routers.

It's evolving toward providing what we call "service communications," and within this new world, what you are seeing is that customers are coming to the industry and essentially demanding, "How do I connect these environments across multiple silos?"

Today, every time I deploy a Kubernetes environment, every time I create a cloud-native app, every time I go to a public cloud, I essentially have a microcosm of operational models, security models, identity models, and connectivity models. What happens is that it becomes very difficult to operate the whole thing as a cohesive system.

We started working within a group of companies, and we started realizing that we couldn't repeat some of the patterns of the past, the notion of saying, "We have an environment in private cloud, in public cloud, in Kubernetes, and they are isolated, managed by different teams."

We started discussing this notion of, How do we understand what the service is across heterogeneous environments? How do we connect a microservice in one side to, let's say, a VM on the other side? And it was going back to the fundamentals of what connectivity is.

This is the evolution, as we go to service networking, that we were discussing with Mitchell: this notion that it's not anymore about connecting from IP A to IP B, but rather how to understand identity across environments. How do we understand end-to-end connectivity? How do we understand policy? How do we understand service discovery?

For that, we started working with a group of companies on defining, in an open standard with open source, how to exchange information between 2 disparate meshes in a way that, from an operations point of view, they can work together.

There is a session this afternoon that will go into the details of how Consul Connect federates with a service mesh project that we have at VMware, NSX Service Mesh, and how you get end-to-end connectivity across the two. That's an open framework that we are working on within the community, with participants like Google and other players, that allows you to stretch the notion of this service networking, this service mesh, across multiple environments.

Once we do that, we achieve these end-to-end service meshes, what we call an enterprise-class service mesh. The idea is that now we can discover, monitor, control, and apply consistent policies across all the environments that you may have, from a service point of view.

Within that, though, when we're talking to our customers, we are seeing that this was potentially not enough. We are very focused on how a service interoperates with another service, but as we go through these layers of communication, as we move from traditional networking to how microservices communicate, the picture is much more complicated.

The authentication question

Now, as we get inside the semantics of the application, services are there to provide some useful value to the users that connect to them, and that raises the authentication aspect: What users connect to what service?

Eventually we end up writing to an object. Within this evolution of networking, service networking, and communications, what we are building at VMware expands the meaning of what you connect, from, in this case, a service, to something much broader: how a user connects to a service, and how that service, on behalf of that user, writes into wherever the data is stored.

That allows us to spread this mesh across multiple compute environments and multiple clouds, but at the same time extend it to these 3 principles. When we discover what services we have, we by definition discover what users authenticate to those services and what objects are being written to, so that, when you have visibility into the SLO that you observe from a microservice, you can ask, "What's the experience that the user is seeing from that entity?"

As you go to the control, you could start thinking in terms of, Why are users in Europe writing into objects in the US? Is this a GDPR violation?

The notion of going to the service mesh concept is a fundamental transformation, because now it brings capabilities that before were hidden in multiple layers of infrastructure. Now you can get closer to the business policies of what you intend to do, and use the deeper semantics that service networking has into the application to provide that.

Of course, this is not just about how to create connectivity across cloud and app, or connectivity across multiple Kubernetes clusters. It's, How do I extend whatever service mesh concept I may have in the cloud-native world? How do I extend it into the virtual machine world? How do I extend it into physical mainframes and bare metal?

To that extent, we are of course integrating with different components of the VMware family and with different partners outside, to make sure that we can connect the service networking of the cloud-native world to the previous world of load balancers and IP connectivity that Mitchell was discussing.

To summarize, we are very excited to be part of this collaboration with HashiCorp, because in this multi-cloud, multi-service transition that we are going through, the only way that we are going to satisfy the requirements of an end-to-end security model and an end-to-end connectivity model, is if we start working, cultivating, and molding these frameworks about how to federate environments to solve end-to-end problems. Thank you.
