
Service Discovery With Consul on Kubernetes

Learn how to use Consul service discovery and Consul Connect service mesh capabilities in Kubernetes clusters.

Kubernetes and microservices have gained huge popularity over the years and this has brought on new challenges and problems to solve. This talk will go through the process of deploying Consul and Consul Connect in a Kubernetes cluster and explore how it can ease the discovery of new services and securely connect existing ones.

Transcript

Hello everyone, and thanks for joining this session on Consul. As the title shows, I'm going to be talking today about service discovery with Consul on Kubernetes, and we'll also go into what Consul has to offer on the service mesh front with Connect.

I'm going to spend the first half of the session talking through service discovery and service mesh and how they tie into Kubernetes. After that, I'll flip over to a demo and show you how Consul is being used in my Kubernetes cluster. Let's get into it. I want to start by quickly showing this slide and level setting on these terms before diving deeper into the content.

The Buzz of Service Discovery and Service Mesh

If you've looked into Consul on your own already, whether through the HashiCorp website or by watching a couple of past talks, you've likely seen a diagram similar to this one. It shows a brief overview of where we've been in the past with bare metal and virtual machines, and the move over to containers.

To understand how these concepts work, we need to look into why we need them. On the left, we see an older or traditional way of deploying applications, whether it is directly on bare metal machines or using some sort of hypervisor like VMware.

For a lot of customers that I work with today, it is still the norm to deploy applications on servers, whether virtual or physical, and those servers get a fixed IP. Firewall rules are then put into place based on that IP, and it can take weeks of work just to get two services talking to each other.

As we shift to a more dynamic deployment model, as shown on the right with containers, we can no longer depend on a service having a particular IP address. Schedulers such as Kubernetes or Nomad can deploy and destroy services at any time for situations like node scale-up and node scale-down.

Due to this dynamic nature, we need an efficient alternative to reference these services and—on top of that—have the ability to control and monitor the communication between them. With that comes the idea of service discovery and service mesh—where we now focus on registering these new dynamic services to a central service registry and let that become our single source of truth.

Once we have the discovery piece nailed down, we can move on to the mesh side of things, which allows us to monitor the communication between our services and control that communication. These topics are the ones that I want to dive into today and demonstrate how they can all tie into Kubernetes.

Before we get into things any further, I want to quickly introduce myself. My name is Jacob Mammoliti, and I'm a consultant out of Toronto, Canada, working at Arctiq. My day-to-day is mostly focused on working with customers to enable them with HashiCorp tooling, specifically around Terraform, Vault and Consul.

On top of this, I spend a lot of time in the microservices space with customers as well, working on all things Kubernetes and helping them migrate to a more containerized approach. You can find me on Twitter and GitHub with the handle below my name.

The code I'm using in the demo coming up shortly will be available on my GitHub as well. We've talked at a high level about what service discovery is. Now I want to dive into it a bit more, and more specifically into Consul's approach to it.

Service Discovery with Consul

When we talk about service discovery, we really mean, "How can I find instances of service A?" I no longer want to worry about connecting to a fixed IP, but rather reference a service by name and let a tool like Consul handle finding the destination instance for me.

In a containerized environment, this idea becomes extremely important because the IPs assigned to an instance of a service are meant to be ephemeral. How does Consul do this? And how can we shift from hardcoding IP addresses to a more dynamic setup?

Consul keeps a centralized service registry that stores location information, such as the IP address, for Consul-enabled services. It does this by running a local client agent on each node in the environment. The Consul client agent is responsible for registering new services that come up on the node with the service registry, and also for providing consistent health checks to ensure that we don't direct traffic to an unhealthy instance.

Looking at the diagram on the right, I should be able to delete one of the App A pods, assuming we're in Kubernetes and App A is a deployment, and Kubernetes will spin up a new pod with a new IP that the Consul client agent registers back to the service registry. All of this is done automatically, and at the end of the day the service endpoint remains the same.

In the diagram, you'll also notice that I highlighted the Consul client agent. The client agents also expose an HTTP API and a DNS interface so that services can reference them locally to discover and communicate with other Consul-enabled services.

Service Mesh with Consul Connect

Now that we've talked about the service discovery piece and Consul's approach to it, I want to dive into service mesh and how that is provided with Connect, which is Consul’s service mesh feature.

When we talk about service mesh, we are essentially talking about an infrastructure layer that sits above the network and below the application. This in-between layer paves the way for features such as service-to-service authentication, encryption across the mesh using mutual TLS, and observability into traffic patterns.

How does it do this? Well, Consul uses the concept of a proxy that is co-located with the application; in the case of Kubernetes, it runs in the same pod. The benefit of this separation of duties is that the proxy and the application are not dependent on each other. Ops teams can come in and make changes to the proxy, maybe a version update, without having to get the development team to rebuild the application.

So that's service mesh in general. Now let's talk about how Connect enables it specifically.

Intentions

The first point I'd like to discuss is intentions. Intentions in Consul provide a way to define which services can talk to which. This will feel very familiar to anyone who's worked with Kubernetes network policies, where we can essentially define the same thing. But again, we're looking for visibility and control at the mesh level, and a single pane of glass to control all of these features.

When handing these types of things over to Consul, we're no longer locked into policies at the Kubernetes level. We can expand the mesh to virtual machines and enforce the same intentions in the same way, all managed by Consul. Customers that I work with are rarely running only on Kubernetes, even when they're migrating to a more containerized approach. They often still have a lot of applications running on virtual machines, so being able to expand a mesh across both containerized and non-containerized applications appeals to them.

Communication over Mutual TLS

Mutual TLS is often thrown into the conversation when service mesh is brought up. It gives us the ability to enforce that any services that communicate are doing so in a secure way.

Observability into the Mesh

I've hinted at this throughout the previous slides. When using Consul and Connect as your mesh, we get a consistent way to observe things like traffic patterns, latency, and error rates. All of this is done without having to ensure a development team has baked that instrumentation into their application. This goes back to the separation of duties that becomes possible with a co-located sidecar proxy.

I've also added a diagram on the right showing an example dig command, which queries Consul to discover instances of my app. When we get into the demo in a few minutes, we'll go through the same sort of exercise.
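
As a rough illustration of the kind of query shown in the diagram, here's what a dig against a Consul agent's DNS interface might look like. The service name my-app is just a placeholder for this sketch, not a value from the slide; 8600 is the agent's default DNS port.

    # Ask the local Consul agent's DNS interface for healthy instances of a
    # service registered as "my-app". One A record is returned per instance.
    dig @127.0.0.1 -p 8600 my-app.service.consul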

Tying This Together With Kubernetes

Since this talk is about Consul on Kubernetes, I want to talk a bit more about how HashiCorp has gotten Consul to fit so well into the ecosystem, and then go over a high-level architecture diagram. It's important to know that Consul offers first-class support for Kubernetes, making it super simple to adopt Consul into the environment.

Easy Consul Installation with Helm Charts

The Helm chart provides a simple way to deploy Consul in a quick and repeatable fashion. And because it is a Helm chart, it becomes even easier to plug Consul into something like Terraform or a CD application like Argo CD.
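
As a sketch of what that installation flow can look like, assuming the hosted chart repository and a release name of consul (not necessarily what I use later in the demo):

    # Add HashiCorp's Helm repository and install Consul with a custom values file.
    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm repo update
    helm install consul hashicorp/consul --namespace consul -f values.yaml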

Syncing Kubernetes and Non-Kubernetes Services

Consul can also sync native Kubernetes services into its own registry, which essentially gives us the ability to expand the service discovery scope and connect more applications. We can also sync services from outside of Kubernetes and still manage them through that single pane of glass. Remember, earlier I mentioned that in most situations not everything is containerized; applications are still going to be running on virtual or physical machines that need attention as well.
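
In the Helm chart, catalog sync is driven by the syncCatalog values. A minimal sketch, assuming a chart version that supports these keys:

    # values.yaml (excerpt): enable two-way sync between Kubernetes and Consul.
    syncCatalog:
      enabled: true
      toConsul: true   # register Kubernetes services into the Consul catalog
      toK8S: true      # expose Consul services back into Kubernetes DNS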

Support for Auto-Injection

Another feature that comes with Consul and Kubernetes is auto-injection. We can tell our Consul deployment to auto-inject in every namespace besides kube-system, which means anytime a pod comes up in one of those namespaces, it automatically gets injected with a sidecar proxy. Of course, this auto-injection can be fine-tuned to specific namespaces and can also be controlled at the pod level or at a higher level, such as a deployment or StatefulSet.
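
At the workload level, opting a pod into injection is just an annotation on the pod template. A minimal sketch (the surrounding deployment is hypothetical):

    # Deployment pod template (excerpt): request sidecar injection for these pods.
    spec:
      template:
        metadata:
          annotations:
            "consul.hashicorp.com/connect-inject": "true"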

Multiple Clusters with Mesh Gateway

One of the big benefits of Consul as a service discovery and mesh tool is the ability to extend your mesh across multiple clusters. Customers that I'm working with who are now starting to adopt the containerized approach with Kubernetes are typically deploying more than one Kubernetes cluster.

Something else that appeals to them is being able to have multiple Kubernetes clusters deployed but span the mesh across all of them, using the same Consul service registry and Consul service mesh. This can be done with a mesh gateway.
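
In the Helm chart, mesh gateways are exposed through the meshGateway values. A minimal sketch, assuming a chart version that supports them:

    # values.yaml (excerpt): run mesh gateways so the mesh can span datacenters.
    meshGateway:
      enabled: true
      replicas: 2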

Kubernetes Architecture within Consul

Building off the last slide and focusing on one cluster for now—this is a high-level architecture overview of Consul deployed in a three-node Kubernetes cluster.

The Consul server is deployed as a StatefulSet, and in this architecture it runs with three replicas. In general, you should run three to five Consul servers for high availability and to satisfy the Raft consensus algorithm.

These server agents follow a leader-follower model and hold the state of your Consul cluster, including contents such as the service registry. Underneath that, we have the Consul client agents, which we've talked about a lot already. Those agents are deployed as a DaemonSet, a Kubernetes workload type that ensures each node runs a copy of the pod. This makes sense because each agent is responsible for registering services that come up on the machine it is running on.

Looking at the right, we see two more pods deployed with our applications. You'll see that each application pod has an Envoy sidecar proxy deployed beside it. These proxies are responsible for enforcing everything from intentions to mutual TLS between App A and App B. No traffic flows between the two applications directly; all communication goes through the proxies, which again gives us full insight into that traffic at the Consul level.

Live Demo

Let's get into the demo. I have a GKE environment up and running, and now we're going to use that to deploy our Consul environment. After that, I'm going to deploy my application. Then we'll demonstrate how to query the Consul service registry, and start looking at how we can block and allow communication with Consul intentions.

Let's get into the live demo. I'm going to show that I have my three-node GKE cluster running. I've already created the consul namespace here, so we can run a get namespaces to show that it already exists. Now we can verify that there's nothing in that namespace.
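
For reference, the checks I'm running here are just standard kubectl commands; the namespace name consul is the one I created ahead of time.

    # Confirm the namespace exists and is currently empty.
    kubectl get namespace consul
    kubectl get all --namespace consul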

We see the consul namespace is empty, so we can move on and take a look at my values.yaml. In here, notice that we have Connect injection enabled, but it is off by default; we have to opt in through annotations in our deployment.
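
The values file is along these lines. This is a trimmed-down sketch rather than my exact file; the datacenter name and replica count reflect what shows up later in the demo.

    # values.yaml (sketch): a small Consul install with Connect injection
    # available but opt-in per workload.
    global:
      name: consul
      datacenter: arctiq

    server:
      replicas: 3

    client:
      enabled: true

    dns:
      enabled: true

    ui:
      enabled: true
      service:
        type: LoadBalancer

    connectInject:
      enabled: true
      default: false   # pods must opt in with the connect-inject annotation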

Templating with Helm

Let's clear this. We're going to template out our Kubernetes manifests using Helm. Helm template writes out all of the Kubernetes YAML for you to standard out. We're going to pipe it to a kubectl apply, which is going to deploy Consul for us in our Kubernetes cluster. Give that a few seconds, and we should see everything get deployed.
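
The command amounts to something like this; the chart reference and release name are assumptions carried over from the install options above.

    # Render the chart to plain Kubernetes YAML and apply it directly.
    helm template consul hashicorp/consul -f values.yaml \
      | kubectl apply --namespace consul -f -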

We can now see things like service accounts, config maps, and everything else related to Consul getting deployed in our consul namespace. It looks like everything got created successfully. We can go ahead and clear this. We want to do a watch on get pods for the consul namespace to make sure that everything comes up.

This watch refreshes every two seconds, and we'll wait for all of our server agents and client agents to come up successfully. We're looking for our server agents here; they will come up first, followed by the top three pods there, which are our clients. It usually takes about 40 seconds to complete end-to-end, so it's pretty quick. And deploying Consul, like I mentioned earlier, is super straightforward to do in Kubernetes.

We see our servers have come up, and now we're going to wait for our clients to come up as well. The first one's done, the second one is done, and the third one is done now as well. Awesome.

We can get out of this. Now we want to get the IP address of the Consul DNS server, which will be a cluster IP in our Kubernetes cluster. The reason is that we want to set up a rule in our cluster saying that anytime someone queries a service that ends in .consul, the query is redirected to that Consul DNS instead of the kube-dns that comes with Kubernetes. We're going to get the output of a get service on the Consul DNS service, and we should get back its internal IP. This is, again, the internal IP address of our Consul DNS server.
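
The lookup is a plain get service; the service name consul-dns is what the chart creates when the global name is set to consul, so adjust it if yours differs.

    # Grab the cluster IP of the Consul DNS service.
    kubectl get service consul-dns --namespace consul \
      -o jsonpath='{.spec.clusterIP}'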

Editing a Config Map

We can now edit a config map, which I've already pre-populated. We open that up and replace the IP address we have in there with our new DNS IP. Again, we're creating a stub domain here to forward anything that ends in .consul to this DNS server. Now that we have that done, we can apply this config map and have the change take effect right away.
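
On GKE with kube-dns, the stub domain lives in the kube-dns config map. A sketch of what the edited config map looks like, with the placeholder IP standing in for the cluster IP we just looked up:

    # kube-dns config map: forward *.consul queries to the Consul DNS service.
    # Replace 10.0.0.10 with the cluster IP returned by the previous command.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      stubDomains: |
        {"consul": ["10.0.0.10"]}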

Let's clear this now. I want to look at the application that we're going to deploy. We're going to deploy Pokedex, a Golang application that I made that queries the Pokémon API, and we're going to create a namespace for it to keep things clean. This is a standard Kubernetes deployment, but make note of the connect-inject annotation set to true, which means that any pod created from this deployment will get a sidecar proxy attached to it.

On the next page here, we have a standard Kubernetes service. The other thing to pay attention to is the pod that we're deploying next to it. This is going to be our client pod, which also has connect-inject set to true, so it's going to get a sidecar proxy as well. But it also declares a service upstream of Pokedex on 8080, which means this client will talk to Pokedex on port 8080. That port gets bound to localhost on the client pod, and we'll see that in a second when we go through the actual service mesh piece of the demo.
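
The client pod looks roughly like this. The image and command are placeholders for this sketch; the two annotations are the ones doing the work.

    # Static client pod (sketch): injected into the mesh, with Pokedex declared
    # as an upstream bound to localhost:8080.
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-client
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-upstreams": "pokedex:8080"
    spec:
      containers:
        - name: static-client
          image: curlimages/curl   # placeholder; any image with curl works
          command: ["/bin/sh", "-c", "sleep 1000000"]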

Application Deployment in a Kubernetes Cluster

Now we can clear this and deploy the application in our Kubernetes cluster. Once that's deployed, we can take a look in that namespace to verify that the pods have come up successfully. It looks like they have; everything shows three of three containers running. Again, this pod runs the actual application itself, and the sidecar proxy container exists in that same pod as well.

Now that we have everything set up, let's look at a sample dig command. We want to query the Consul service registry, so we're going to run a Kubernetes job that digs against our service. We should get back the IP address of the service pod that we've deployed. We can apply our job, and it's going to spin up a pod whose logs will show us the output of the dig command.
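
The job itself is small. A sketch, where the image is simply any container that ships with dig and the service name matches the Pokedex service:

    # Job (sketch): run a one-off dig against the Consul service registry.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pokedex-dig
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: dig
              image: anubhavmishra/tiny-tools   # placeholder; any image with dig works
              command: ["dig", "pokedex.service.consul"]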

We can see it has already completed. We can clear this. Let's get the name of the pod that job just ran, copy that out, and do a kubectl logs on it to get the output of the dig command the pod ran. In the answer section specifically, we see the IP address of our service's pod.

Let's try something else and test Consul a bit here. We're going to up the replica count of our deployment; instead of one pod, it's now going to be three. We edit the deployment and change the replicas, which is going to deploy two additional pods of our application. If we do a kubectl get pods, we should now see those two additional ones. They've all come up running, and they also have the sidecar proxy injected into them as well.
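
Editing the deployment is one way to do it; an equivalent one-liner (deployment and namespace names are assumptions) would be:

    # Scale the Pokedex deployment from one replica to three.
    kubectl scale deployment pokedex --namespace pokedex --replicas=3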

Now let's delete the Kubernetes job that ran the dig command, because we're going to run it again, and this time we should get back three IPs in the answer section. We apply the dig job again, do a get pods to find the new pod for that job, and then do a kubectl logs on it. In the logs, we should now see three IPs in the answer section, which we do. Perfect. Anytime we query that service, we get back all three instances of our application.

Working on the Service Mesh

First, we want to get the public IP of our Consul UI. Because the service is of type LoadBalancer, GKE exposes a public IP that we can copy and access from outside of the cluster. We can go ahead and open it, and we're greeted with the Consul UI.
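
The lookup is another get service call; again, consul-ui is the name the chart gives the UI service when the global name is set to consul.

    # Grab the external IP that GKE assigned to the Consul UI load balancer.
    kubectl get service consul-ui --namespace consul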

In the Consul UI, the first thing you see when you load it is a list of all the services that Consul is aware of. We get the Consul service itself, we get our Pokedex service along with its sidecar proxy, and we also get our static client with its sidecar proxy; everything is injected with a sidecar proxy here, which is why we see them in the list.

There's other information here, like Arctiq, which is our datacenter name. We can see the nodes that are running Consul for us. You can also see things like key/value storage and ACLs, but what we want to focus on right now, for service mesh, is intentions.

Creating an Intention

The first intention I want to create is a "deny all," which means no services can talk to each other over the Consul service mesh. We choose * for both the source and the destination, and we choose deny. Then we add a description saying that all communication between services on the Consul mesh is denied.
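
For reference, the same deny-all intention can be created from the Consul CLI instead of the UI; a sketch of the equivalent command:

    # Deny all service-to-service communication over the mesh by default.
    consul intention create -deny '*' '*'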

We can click save and flip back over to the terminal. Now we're going to run a kubectl exec, as if we're running a command inside that pod. We choose the static client because that's the one that talks to the Pokedex service.

We're going to exec into it and run a curl against localhost because, remember, with the upstream we're binding port 8080 locally on this static client.

Now we hit the endpoint of our Pokedex service; I'm going to choose charmander to get information on it. We should get back an exit code 52 from curl, which means the request got an empty reply; essentially, the traffic was blocked because of our deny rule.
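
The check amounts to something like this; the pod name, namespace, and URL path are placeholders for this sketch.

    # Exec into the static client and hit the Pokedex upstream bound on localhost.
    kubectl exec --namespace pokedex static-client -c static-client -- \
      curl -s http://localhost:8080/charmander
    echo $?   # exit code 52 (empty reply) while the deny-all intention is in place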

Now let's add an "allow" rule. We choose our static client as the source and Pokedex as the destination, click allow, and in the description we say that the static client is allowed to talk to Pokedex.
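
And, for completeness, a sketch of the CLI equivalent of that allow rule:

    # Allow only the static client to reach Pokedex over the mesh.
    consul intention create -allow static-client pokedex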

With these two intentions in place, this is the only communication we're allowing over the mesh. We can see the new intention has been added at the top; it takes precedence because it's more specific than the wildcard deny. We can also filter by allows and denies, and on the right you can see that we can search intentions if we ever end up with a lot more than the two we have.

We flip back over to the terminal and run the exact same command again. But because I'm feeling a bit more confident this is going to work now, we pipe the output to jq to pretty-print the JSON response.

I hope this gave you a good introduction to service discovery and service mesh with Consul and Kubernetes. Thank you for tuning into this session today. If you have any questions, please let me know below, and feel free to reach out on GitHub, Twitter, or by email as well. Thank you.
