
HashiConf Global 2021 Keynote — Consul API Gateway & Consul 1.11

Watch the announcement and demo of Consul API Gateway, a new capability coming to HashiCorp Consul, plus the launch of Consul 1.11

HashiCorp Consul has been used at hundreds of companies, big and small, to provide service networking in three key areas:

  • Service Discovery

  • Network Infrastructure Automation

  • Service Mesh

However, there’s a fourth pillar to a service networking architecture that HashiCorp has opted to delegate to third-party tools: north-south traffic ingress — i.e. traffic coming into and out of the network.

A number of tools manage north-south traffic, but one of the more popular patterns emerging is via an API gateway. For a few years now, HashiCorp customers have asked us to add this north-south ingress capability to Consul natively in order to create a smoother, simpler-to-maintain workflow for east-west and north-south traffic management in their service networks.

»Introducing the Consul API Gateway and Consul 1.11

At HashiConf Global 2021’s Day 2 keynote, HashiCorp Co-Founder and CTO Armon Dadgar announced that we are now actively working on adding this capability to Consul, and you can find our announcement and early details on the new capability in our blog Introducing the Consul API Gateway.

In the keynote, Consul Engineer Nick Ethier gives a demo of Consul API Gateway, and the keynote concludes with another announcement: Consul 1.11, which adds multi-tenancy with administrative partitions and a new installation-and-management Consul Kubernetes CLI.

»Transcript

Armon Dadgar:

Hello, and welcome to Day 2 of HashiConf. I hope everyone enjoyed Day 1. I know there was a ton of great sessions. If you didn't catch them all, they'll be available on video on demand. Today, for this session, I'm excited to talk about the future of service networking and Consul.

»Transitions Within The Service Networking World 

When we talk about what's happening in the world of service networking, there's this transition taking place from traditionally what we had, which was more static infrastructure, where our networking was a lot more host-based. We had a fixed set of machines. We knew which things were what. Things didn't change too often. 

Now, as we're increasingly adopting cloud, we're increasingly adopting containers, we're embracing serverless functions — things are becoming much more dynamic. At the same time, our approach to security is shifting towards one of zero trust, where we want to have much more explicit rules around which of our services can interact with one another.

At the same time, the application paradigms are evolving as well. Historically, we maybe had a monolithic service where many different application teams collaborated on a single application. And now, increasingly, that's moving towards a more microservice architecture where each team owns their own app. Those apps are integrating with one another using APIs over the network.

»Service Networking Pillars 

As a result, our networking is now much more complicated. This brings us to our view of service networking. Service networking is trying to address the challenges of having all of these dynamic services: Needing to discover and have traffic go between these different microservices. Needing to secure and protect the interactions between them — and govern which services are allowed to talk to which other services. 

It covers automation: as our applications scale up and down and move between different nodes, how do we automate the network infrastructure that supports them? 

And lastly, how do we manage the access to these APIs as well, whether that's coming from outside of our network, whether that's coming from traditional monolithic applications into our microservices? 

So these four combined make up the pillars — as we think about it — for service networking. 

»Consul’s Origin and Evolution

Consul was designed to solve all these problems, but it started by thinking about those service discovery challenges. At the heart of it, we thought about needing a way to have all of our services register themselves so that we have a bird's-eye view of all the services and where they are running, regardless of what platform the applications are on — such that we could enable applications to discover one another. Our web server could discover the database. The API server could discover the web service, etc., and these applications could talk to each other, even in a dynamic environment.

At the heart of how that works, you have Consul act as that central service catalog and enable applications to query it over many different interfaces. That could be a DNS interface where they're querying for their upstream service. It could be a rich HTTP API. It could be through a set of intermediate middleware that's being automated by Consul. And the idea is that this would span any type of platform — Windows, Linux, VMs, containers, bare metal, etc.
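As a rough illustration (assuming a local Consul agent on its default ports and a registered service named "web"), those lookups might look like this:

```shell
# DNS interface: resolve healthy instances of the "web" service
# via the agent's DNS server (default port 8600).
dig @127.0.0.1 -p 8600 web.service.consul SRV

# HTTP API: list healthy instances of the same service.
curl http://127.0.0.1:8500/v1/health/service/web?passing
```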

»Consul As A Service Mesh

When we move beyond just the discovery portion, we talk about how we actually secure the access between these services. It's not just that I want my web service to be able to discover my API. I want to be able to govern it and have a rule that says: is the web service allowed to talk to the API? This becomes a key building block for enabling zero trust security.

Within that realm, Consul now is really a full-blown service mesh. And when we talk about a service mesh, there are two layers to it. There's Consul, which acts as the control layer. It manages the metadata of where all the services are running. It allows you to define the intentions that say our web server is intended and allowed to talk to the API. And it lets you manage more sophisticated Layer 7 controls and routing rules. 

You might want to do things like version splitting. You might want to do things like controlling which endpoints are accessible, etc. Consul defines and manages all that control information. The data plane, how traffic actually routes between these different services, goes through a series of proxies. Those could be Envoy proxies, which Consul supports first-class, but they could also be things like NGINX, or solutions like HAProxy and others.
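As a sketch of what such a rule looks like (the service names here are illustrative), an intention allowing "web" to call "api" can be expressed as a service-intentions config entry and applied with the Consul CLI:

```shell
# Illustrative intention: allow the "web" service to call the "api" service.
cat > web-to-api.hcl <<'EOF'
Kind = "service-intentions"
Name = "api"
Sources = [
  {
    Name   = "web"
    Action = "allow"
  }
]
EOF
consul config write web-to-api.hcl
```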

Consul integrates with a wide variety of things at the data plane layer; it focuses on being a highly scalable control plane for managing the network. As a result, we spend a lot of time thinking about the enterprise-scale requirements of that control plane. It's one thing to do this with a handful of services. It's a different thing to do it at an enterprise scale where you have thousands and thousands of services. 

»The Consul Scale Benchmark

One of the benchmarks we did earlier this year that we're very proud of is the Consul Scale Benchmark. We ran Consul on a cluster with 10,000 nodes on over 170,000 service instances and stress-tested our ability to add and remove nodes. We stress tested our ability to add and remove security controls, enabling and disabling routing between different services. And we saw that even at incredible scale, Consul's able to propagate those changes in sub-seconds to the cluster — really highlighting our focus on what it means to do networking at scale.

»Consul 1.11 Beta Release

Now, we're excited to continue pushing the edge of how we make some of this stuff simpler. 

»Kubernetes CLI

Within the upcoming Consul 1.11 release that's available in beta today, we're introducing a Kubernetes CLI. This makes it much easier to deploy and manage Consul on a Kubernetes cluster. And it brings in a bunch of these considerations around how do we harden and do this securely as well. 
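A rough sketch of that workflow, assuming the consul-k8s CLI is installed locally (the exact flags may differ, so check the Consul on Kubernetes docs):

```shell
# Install the CLI (one option; standalone binaries are also available),
# then install Consul into the current Kubernetes cluster and check on it.
brew tap hashicorp/tap
brew install hashicorp/tap/consul-k8s
consul-k8s install      # deploys Consul into the cluster
consul-k8s status       # reports the state of the installation
```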

»Admin Partitions

The other piece that we're announcing as part of Consul 1.11 is an additional enhancement we are calling admin partitions. This allows you to take a single physical Consul cluster and split it into logical sub-clusters or partitions. 

Each of those partitions can then be delegated and managed by different application teams. This makes it a lot easier to enable multi-tenancy. Or, for example, you might have app teams that own their own Kubernetes cluster, and they're mapping those services into a single logical admin partition.
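A hedged sketch of how that might look from the CLI (Consul Enterprise 1.11; the exact commands and flags are illustrative):

```shell
# Create a logical partition for one team, then scope a catalog query to it.
consul partition create -name team-a
consul catalog services -partition team-a
```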

»Consul ECS Integration 

We realize that there's also a broad ecosystem of platforms beyond just Kubernetes that our users want to support. While we're focused on making sure Consul's a great experience on Kubernetes, we're also hearing feedback from users: "What about platforms like Amazon ECS? We want to consume that alongside our Kubernetes and EKS clusters." So, we're super excited to announce the integration between Consul and ECS.

This supports both flavors of ECS — the Fargate flavor, as well as the EC2 flavor. This is now in public beta, and it'll be going GA soon. We're excited about this, and we're already seeing our customers showcase how they're spanning traditional VM-based workloads on EC2, Kubernetes-based workloads with EKS, and now ECS-based workloads as well.

»Sync Agent for Terraform Enterprise and Terraform Cloud

Now, the third pillar, when we talked about service networking, was the automation piece. Often we see a disconnect between what's happening at the application level (developers are scaling their applications up and down, or Kubernetes is moving them from one node to a different node) and the underlying network, which has to support that.

Traditionally, we've seen a very ticket-driven workflow. The application gets deployed, and someone has to file a ticket to manually update a load balancer, firewall, or API gateway. One thing we looked at with the network infrastructure automation use case is how we make it so that as our applications are updated, the underlying network infrastructure is automatically updated as well. What we've done here is bring together Consul and Terraform.

We allow you to codify how the network should behave, whether that's a firewall rule, an API gateway, or the backend of a load balancer — and define it in an infrastructure as code way with Terraform and then integrate that with Consul. So the moment the application changes, we can automatically update that template, execute it, and update the underlying infrastructure. 

In this type of case, we might have an application that scales up. It registers with Consul, Consul triggers the Terraform automation, which then updates the load balancer, firewall, or API gateway automatically without any ticket involvement. We're super excited to introduce some enhancements to that. That integration is driven by the Consul Terraform Sync agent. We're now introducing the 0.4 version, which brings integration with Terraform Enterprise and Terraform Cloud as well.
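A minimal, hedged sketch of what a Consul Terraform Sync task looks like (the module path and service names are illustrative; with 0.4, a Terraform Cloud or Terraform Enterprise driver can be configured in place of plain Terraform):

```shell
# Write a minimal Consul Terraform Sync configuration and start the daemon.
cat > cts-config.hcl <<'EOF'
consul {
  address = "localhost:8500"
}

driver "terraform" {
  # With the 0.4 release, a Terraform Cloud / Terraform Enterprise driver
  # can be used here instead, adding central governance and visibility.
}

task {
  name     = "update-load-balancer"
  source   = "./modules/load-balancer"   # hypothetical Terraform module
  services = ["web", "api"]              # re-run the module when these change
}
EOF
consul-terraform-sync -config-file=cts-config.hcl
```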

For the thousands of organizations using that and trusting it for their compliance and governance needs, this allows them to bring the Terraform integration — such that they have the central governance and visibility of Terraform Cloud and Terraform Enterprise — and link it to Consul to enable that end-to-end automation to take place. Super excited about the use cases that's enabling.

If this is interesting, exciting, if you're already a Terraform Cloud user or you're a Terraform Enterprise customer, check out the 0.4 release of the Consul Terraform Sync. If not, it's free to sign up with Terraform Cloud and get started with it.

So we're super excited, when we talk about these pillars — service discovery, service mesh, network automation — that thousands of organizations are using Consul and powering their infrastructure at incredible scale. 

Some of these organizations include Robinhood, Pandora, Bloomberg, ROBLOX, Stripe, CloudFlare, Tide Bank, and many, many others. Many of those folks are actually speaking at HashiConf this time around. Check out their talks and learn more about how they're using it. Others have talked in the past or shared their case studies online.

One feedback item we got regularly from our existing users is that Consul does a great job with those existing three pillars, but there's a gap in terms of how we think about accessing our API-driven services. 

What about ingressing traffic into this Consul cluster? How does that work? Consul has a wide range of partnerships, and we've integrated with a broad range of solutions for doing API gateways and ingress, but customers keep asking: isn't there a more native Consul way of doing this? 

»The Consul API Gateway

We're super excited today to announce the introduction of the Consul API Gateway to do exactly that — to bring a native answer for how we should access these API-driven services as part of Consul. To spend more time talking about that and to do a demo, I'm very excited to introduce Nick Ethier. 

Nick Ethier:

Thank you, Armon. We also want to provide the same benefits of a modern networking solution that we have discussed for Consul to applications and services running outside the datacenter: supporting different environments and runtimes; providing access in a way that is secure, performant, and scalable across teams and organizations; and offering an extensible, out-of-the-box approach to help reduce time to value. We see these capabilities as fundamental to one of our core guiding principles — workflows, not technologies. 

The Consul API Gateway will enable customers to deploy a secure ingress and egress point for controlling access and requests from external users and services. With Consul's API gateway, practitioners will be able to allow external clients to connect to services via HTTPS, expose services externally with service certificates signed by trusted CAs, and route traffic based on traffic characteristics, such as HTTP hostnames, paths, and header values.

»Consul API Gateway Demo 

To show you a bit more, let's jump into a quick demo. Before we actually get into the code, let's talk about what our goal for this demo is going to be. We want to deploy a service to the Consul service mesh and be able to expose it securely to the public with our own TLS certificate. 

Before the API Gateway, we could only do this with an ingress gateway in Consul, and this lacks a few key features necessary for us to expose the service how we want. For example, we can't configure an ingress gateway with a custom TLS certificate. 

Now with Consul API Gateway, we'll be able to establish a secure, external connection to our services with a custom TLS certificate that we can provide. Not only that, but you can integrate existing tooling, like cert-manager, to manage these certificates for you.

Cert-manager can rotate a certificate, and the Consul API Gateway will pick that up and reload it into your gateway without any downtime. To accomplish this, we're going to need to deploy the actual gateway configuration, provide it with a domain name and a port — and then we'll need to create a route that will direct traffic to our service.

»Deploy the Gateway Service 

Let's look at the code. The first thing we'll need to do is deploy our gateway service. Let's take a look at this YAML. There are a few interesting things I want to call out. First, we're going to be deploying a gateway resource and give it a name of Web Gateway. And we're going to define it as a Consul API Gateway. 

We're also going to define a few annotations to attach some DNS records to it. And lastly, we're going to define the port that the listener’s going to use. We'll also configure where the routes will be sourced from for this listener. And we'll also give it a TLS configuration to reference a certificate for this listener. This references a Kubernetes secret. And, in this case, it's been previously provisioned by cert-manager.
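A hedged reconstruction of what such a gateway manifest might look like, using the Kubernetes Gateway API resource shapes (the API version, gateway class name, DNS annotation, hostname, and secret name are all illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: web-gateway
  annotations:
    # Attach a DNS record to the gateway (assumes external-dns is installed).
    external-dns.alpha.kubernetes.io/hostname: gateway.nick.sh
spec:
  gatewayClassName: consul-api-gateway   # illustrative class name
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      allowedRoutes:
        namespaces:
          from: Same                     # where routes for this listener come from
      tls:
        certificateRefs:
          - name: gateway-cert           # Kubernetes secret provisioned by cert-manager
EOF
```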

»Register the Echo Service 

There are a couple of things we need to define here. The first is the Kubernetes service itself. And the second is a service defaults definition. The service defaults definition is a Consul config entry, and it's used to define the default protocol that our service is going to use. This enables some features in our service mesh.

We also have the Kubernetes service object, which will point to the deployment that we'll create in a bit. Let's get this created. Great — that looks like it worked. 
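For reference, a rough sketch of those two objects (the names and port are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: echo
spec:
  protocol: http            # default protocol; enables L7 features in the mesh
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo               # matches the deployment created next
  ports:
    - port: 8080
      targetPort: 8080
EOF
```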

»Deployment Definition

Now we get to our deployment definition. There are two important things to note here. The first is that we're deploying a Docker image that is an echo server. The second is that we're adding the Consul connect-inject annotation. That allows Consul to detect that this service needs to join the service mesh and deploy the sidecar necessary to do so. Let's deploy this to our cluster. Great — that looks like it worked. 
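A hedged sketch of that deployment (the image, port, and arguments are illustrative; the annotation is what triggers sidecar injection):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
      annotations:
        consul.hashicorp.com/connect-inject: "true"   # join the service mesh
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo       # any simple echo server will do
          args: ["-listen=:8080", "-text=hello"]
          ports:
            - containerPort: 8080
EOF
```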

Let's review what we've done here. We've created a gateway, and we've created a service that references a deployment that we've created as well. We can also look at the underlying Kubernetes primitives created as a result of these. Let's look at the services first.

We have our echo service that we created manually, but we also have this web gateway service created as a result of registering our gateway. You can see that this has a cluster IP, as well as an external IP that'll get allocated because it's a load balancer.

Next, let's take a look at the pods that got created. We can see our echo pod from our deployment from the application. We also see the web gateway pod. This is the pod created as a result of creating the actual gateway and hosts that listener that we configured. 

We can also look at the actual gateway object for its status to indicate if it's operating normally. Let's do that now. I'm going to pull the JSON object of this gateway and look at its status. We can see there are a few conditions that have been set for it. The first one being scheduled. It's scheduled successfully. Then, once that pod actually started and was running — and we detected that — we have another condition that it's ready to receive traffic.
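The inspection steps described above amount to something like the following (resource names follow the earlier sketches; jq is used only for readability):

```shell
kubectl get services    # the echo ClusterIP service plus the web-gateway LoadBalancer
kubectl get pods        # the echo pod plus the web-gateway listener pod
kubectl get gateway web-gateway -o json | jq '.status.conditions'   # Scheduled / Ready
```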

»Define the Route For The Gateway

 If we take a look at this route, we give it a name, and we need to reference the gateway that we deployed. This gateway name was Web Gateway. Then we just define the rules for this route. In this case, it's very simple. We're forwarding all traffic to a single set of services. You could have other rules here, such as routing on the path, query parameters or header values. But in this case, we wanted to keep it simple.

Just like with the gateway, we can look at the route to see its status and if it was bound to the gateway successfully. Let's do that now. If we look at the status of this route, we can see that it was permitted by the gateway — that condition's true. Great. It looks like everything's set up and working as expected.
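A hedged reconstruction of that route and its status check (Gateway API resource shapes again; the service name and port are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: echo-route
spec:
  parentRefs:
    - name: web-gateway       # bind this route to the gateway we deployed
  rules:
    - backendRefs:
        - name: echo          # forward all traffic to the echo service
          port: 8080
EOF
# Check that the route was accepted by the gateway.
kubectl get httproute echo-route -o json | jq '.status'
```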

»Review The Deployed Service 

Let's switch windows over to our browser here and take a look at the service we deployed. We've defined our gateway. We've registered and deployed our service, and we've created a route to direct traffic to it. Let's see if this all worked. We're going to gateway.nick.sh.

This is the domain we configured in our gateway configuration, and we should see our echo service deployed at this endpoint. We've got a response back from our echo service. Let's check and see if that connection's secured. We've got a TLS certificate that was provisioned by cert-manager securing our connection. We've successfully deployed our API gateway. 
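The same verification works from the command line (using the domain configured on the gateway listener):

```shell
# -v prints the TLS handshake, including the cert-manager-issued certificate.
curl -v https://gateway.nick.sh/
```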

Security and routing are just the tip of the iceberg in terms of features we want to provide in this tool. There's so much more that we plan to do. We're excited to have you try out a tech preview of our Consul API Gateway. We'll have that available shortly for Kubernetes deployments. Armon, I'll hand that back to you to close us out.

Armon Dadgar:

Thank you, Nick. Super excited about the announcement of the Consul API Gateway — really awesome to see it in action, and can't wait for users to dig in and get their feedback. 

As we've talked about all of these different pillars of networking, we realize there's a lot of complexity at play. We know, given this complexity, it's important to get it right, and we want to do everything we can to simplify that. So, one of the big investments we've been making to help users tackle this and make it simpler is to deliver it as a cloud-managed service.

»HashiCorp Cloud Platform (HCP) 

We first did this with our HashiCorp Consul service on Azure. And then, we expanded earlier this year to introduce Consul on the HashiCorp Cloud Platform. The HashiCorp Cloud Platform is our way of delivering managed services of all of our products.

Earlier this year, when we first announced the Consul service, it was focused on single-region deployments within AWS. That standard SKU is focused on getting up and running and solving for some of those simpler environments where your whole application might be located in a single region.

»HashiCorp Consul Plus SKU 

Many Consul customers, however, have a global infrastructure. It spans multiple regions, whether for being closer to customers, for performance, for high availability or disaster recovery — and many other use cases. 

So, we're super excited to introduce the HashiCorp Consul Plus SKU, which now supports multi-region deployments. This means your application can span multiple different Amazon regions and have Consul federate between those as part of a managed service. 

»Improved Onboarding

Additionally, we got a lot of feedback on the getting started experience, so we've been investing in making that easier to use. This includes better guides, more automation, and a clearer onboarding path for users getting started with the cloud service. If you haven't yet signed up, I encourage you to check out cloud.hashicorp.com. It's free to sign up for HCP, and you can kick the tires — and we'd love to hear your feedback on HCP Consul.

»Presentation Recap 

Now, we covered a lot of different topics today. As a quick recap, the way we think about service networking is ultimately about solving for four different pillars of challenge. That includes discovery and how our services route to one another — find one another — and masking that we have this highly dynamic infrastructure.

It's about securing it and governing access between the different services. It's about enabling automation so that as our application layer changes, the underlying network can be automated to support it. And it's about enabling access to API-driven services. 

To that end, we've announced support today as part of Consul 1.11 for a new and improved Kubernetes CLI, making it easier to deploy and manage Consul in a Kubernetes environment.

We announced native ECS integration, both ECS Fargate and ECS on EC2. So you can support those environments alongside your VM and Kubernetes environments. 

We introduced admin partitions as part of the enterprise product to make multi-tenancy easier for customers running large-scale Consul deployments. 

Consul Terraform Sync 0.4 was also introduced, which, for the automation use case, enables integration with Terraform Cloud as well as Terraform Enterprise.

Lastly, we introduced the new API Gateway for Consul. This is looking at access as a first-class capability within Consul as well. 

All of this is part of our effort to deliver this as open source, as self-managed software, and through HCP as a cloud-managed solution.

Consul 1.11 is now in beta. I recommend checking out the blog post. There are a lot more details about it there as well. And download it, kick the tires, and give us your feedback as well. 

If you're interested in learning more, there's a lot of content here. You can hear from some of our customers, like Stripe, Tide, and Workday, to learn what they are doing and how they are using Consul in their environments.

There are some talks by HashiCorp employees that are going deeper into the roadmap and how some of these capabilities work, and some of the use cases around Consul. I encourage you to check out all the different sessions and content that we have available. 

With that, I hope you enjoyed the rest of HashiConf Day 2. We have a ton of great talks. I'm excited for everything else that is to come. But for now, I'm going to hand it back to our great MCs. Thanks so much.
