
Consul, Microservices, and Hybrid Cloud Migrations

See a demo of HashiCorp Consul deployment in a hybrid Kubernetes and VM environment and learn how it can help your migration to cloud microservices with legacy application support in mind.

»Transcript

Hi everyone, and welcome to HashiConf Global. I'm Elif, and today we will discuss approaches to deploying Consul in heterogeneous workloads that mix virtual machines and containers, especially in the context of many companies moving away from legacy applications to microservices. And of course, infrastructure needs to be a pillar of this movement.

Let me quickly introduce myself. I'm a DevOps engineer with experience in infrastructure administration, optimization, and management. I have been working with HashiCorp tools and technologies for over three years now, helping clients ramp up the transition from, let's say, traditional infrastructure to either hybrid infrastructures or even fully to the cloud. I'm quite happy to say also that I'm one of the core organizers of the HashiCorp user group in Bucharest. And, not least, I'm quite happy to collaborate with communities to drive technical excellence and have a cup of coffee with my peers.

Today's talk is structured as follows. Firstly, we'll take a look at the status quo in the industry. Next, we'll cover what Consul is from a bird's eye view. And then we'll deep dive into deployment strategies and the ways we can deploy Consul in hybrid environments. We'll explore one of these aspects in detail in the hands-on bit.

»The Rise of Cloud Microservices

Okay, so what's the current context in the IT industry? According to the most recent studies and analyses, the global cloud microservices market is expected to rise at a considerable rate in the next few years. The surge in the volume of enterprise data, rising automation of business processes, and growing digitalization are the major drivers for the software market.

Microservices have been making waves for the past decade. But what are microservices? The approach is based on developing separate programs, each providing one service of a single application. Each of those can move and scale independently of the other programs, and they function in an ephemeral way. Moreover, the adoption of this paradigm has led to the rise of containers and container orchestration tools such as Kubernetes and HashiCorp Nomad. The game has changed, but "why" is the question.

Firstly, let's quickly take a look at DNS. At that point, traditional DNS-based service discovery was no longer enough. In order to be able to continuously support the operation of services, developers and development teams needed more and more networking features in the areas of security, traffic control, reliability, and observability. Additionally, the number of applications deployed each day was increasing dramatically. It was getting really cumbersome to include all these capabilities within the application code.

Then Kubernetes appeared on the scene, and it did try to address some of these concerns. However, it still has some limitations; let me mention a few. For example, Kubernetes does not check if a service is healthy before trying to communicate with it. It does not encrypt communication between services, and I would dare to say that it is not mature enough when it comes to scaling up or removing pods, and by that I mean scaling down.

This might be the moment when you say, "Oh, what is she talking about? Is this presentation suitable for me?" Well, I'd say yes, because we are witnessing the transition and the adoption from classical infrastructure to other types of infrastructure, alongside the way applications are developed. I'm most certain that some of you have faced such issues. The question is: what is the best approach?

Because we want no service interruption, we want no data loss, and we still want to be able to deliver the best service to our clients. It's not a lift and shift effort. It should be a smooth transition. And this is where Consul comes onto the scene. And this is where we start discussing the beginning of service mesh in hybrid environments.

Following the industry trends, the developers of Consul released a fully-fledged service mesh in 2018, and that is Consul Connect (also known as Consul service mesh). Consul Connect also enables a Kubernetes cluster to securely communicate with services outside itself. Connect enables a sidecar proxy in Kubernetes to communicate through a gateway with standalone databases, virtual machines, and even services across different clouds.

»Consul's Evolution

So since we are talking about Consul, the next natural question would be: what is Consul, and what does it do for us? Well, Consul is a service networking solution. It is aimed at automating network configuration, discovering services, and enabling secure connectivity across any cloud or runtime. What's really interesting about Consul is that it takes a different approach to networking. It provides a central service registry, and it enables service discovery in order to allow services to register themselves, discover each other, and connect with each other directly. It enacts a service mesh solution in order to simplify networking by shifting naming abstraction, routing, and authorization from central middleware to the very endpoint. It drives automation to eliminate the operational burden of updating networking middleware.

And let's not forget the fact that Consul is platform agnostic. This is what makes it a great fit for any environment, including legacy ones. Consul can be used alongside virtual machines, containers, and container orchestration tools. Keep this in mind, because we'll come back to this and actually see how this is done.

I was telling you before that Consul Connect was released in 2018. Let's quickly take a look at how it has evolved from its birth up to this moment. Consul Connect was released in version 1.2 of Consul. Up to that point, Consul was primarily known as a service discovery tool. However, starting at that moment, it provided all the components of a fully-fledged service mesh, such as automatic traffic encryption and easy communication rules created via intentions. These allow or deny service communication.

Later in 2018, as you can see on the timeline, support for Consul in relation to Kubernetes was considerably improved by adding several capabilities, such as:

  • Service catalog for syncing Kubernetes services to the Consul catalog and vice versa.

  • The official Consul Helm chart for running and configuring Consul on Kubernetes.

  • Kubernetes auto join, and many more.

Later in 2019, we noticed the release of the Kubernetes authentication method type. This is the moment when applications actually living in Kubernetes were able to authenticate to Consul natively. Next year, Consul continued to evolve and several features were added.

Okay, how does Consul work? The core process of Consul is its agent. This can be run in either server mode or client mode. The servers are responsible for storing data about the services that are running, their configuration, health statuses, and much more. A production Consul deployment would typically run 3 or 5 Consul servers. However, for development purposes, as you'll see, one server should suffice. Multiple servers are run in order to ensure data persistence and high availability. Should one server go down, the remaining ones are still able to serve requests, and, very importantly, there is no data loss. In contrast, clients are responsible for detecting the health of services running on their nodes, and the health of other nodes in the datacenter. These clients send the information to the servers so that their catalog is kept up to date.
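To make the server/client distinction concrete, here is a sketch of a minimal server configuration for a development datacenter. The datacenter name and paths are placeholders, not taken from the talk:

```hcl
# server.hcl — minimal single-server configuration, for development only.
# A production deployment would run 3 or 5 servers (bootstrap_expect = 3 or 5).
datacenter       = "dc1"
data_dir         = "/opt/consul/data"
server           = true
bootstrap_expect = 1          # one server suffices for a dev datacenter
client_addr      = "0.0.0.0"

ui_config {
  enabled = true
}
```

A client agent would use the same binary with `server = false` and a `retry_join` pointing at the servers.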

»Deployment Strategies / Progressive Delivery

We've seen what Consul is, we've seen how it works. Let's see how we start using it. Before deep diving into Consul deployment strategies, let's clarify what we mean by deployment strategies. In a nutshell, deployment strategies are a set of models and practices that enable the release and update of an application in a quick manner. Let's translate this to our topic today. This means that we are going to explore how to deploy a Consul datacenter in a hybrid environment. By hybrid environment, I mean a mix of virtual machines and Kubernetes. The virtual machine could be either on-premises, or in any cloud provider. Also, the Kubernetes service could be either on-premises, or one could use any managed service from any of the major cloud providers. So there would be four possible combinations.

The first case would be when clients are running on non-Kubernetes nodes, and these join a cluster which sits on top of Kubernetes. In such a case, we need to ensure that the pod IPs of the Consul servers and services in Kubernetes are routable from the VM, and that the VM can access port 8301 for gossip and port 8300 for RPC on those pod IPs. Moreover, it may seem obvious, but I need to mention that unless we're using mesh gateways, we also need to ensure that the IP address spaces do not clash. That would be a major networking issue.
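For this first case, the VM-side agent configuration could be sketched as follows. The pod IPs below are placeholders; they must be routable from the VM exactly as described above:

```hcl
# client.hcl — sketch of a Consul client on a VM joining servers that run
# as pods in Kubernetes. The addresses are illustrative placeholders; the VM
# must reach them on port 8301 (gossip) and port 8300 (RPC).
datacenter = "dc1"
data_dir   = "/opt/consul/data"
server     = false
retry_join = ["10.32.0.10", "10.32.0.11", "10.32.0.12"]  # Consul server pod IPs
```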

The second case would be when the Consul servers are running on top of VMs, and the clients are joining the datacenter from within Kubernetes. In order to use an existing cluster to manage services in Kubernetes, clients can be deployed using the Helm chart. This design allows Consul tools, such as envconsul, consul-template, and others, to work with Kubernetes, since we're running Consul as a pod inside Kubernetes, and the clients will automatically be configured with the appropriate address. However, when running Consul servers outside of the Kubernetes cluster and clients inside Kubernetes, there are additional networking requirements one needs to take into account, and therefore I would highly encourage you to check the official documentation before starting the design.
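As a sketch of this second case, the Helm chart can be told to deploy only clients and point them at external servers. The addresses are placeholders, and exact key names may vary between chart versions, so treat this as illustrative:

```yaml
# values.yaml — sketch: clients inside Kubernetes, servers on VMs outside.
# 10.0.0.5 is a placeholder address of an external Consul server, reachable
# from the pods.
global:
  enabled: false        # do not deploy servers in this cluster
client:
  enabled: true
  join:
    - "10.0.0.5"        # gossip address of the external server
externalServers:
  enabled: true
  hosts:
    - "10.0.0.5"
```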

The third and last case would be when there is a single Consul datacenter that spans multiple Kubernetes clusters. There would be one cluster which has both servers and clients, and the others would join the datacenter only as clients. In order to accomplish this task, I've broken it down into different steps. The first one would be the actual setup of the Consul server using an Ansible role. The second would consist of performing certain actions on the Kubernetes cluster so that it is able to authenticate with the server and use the Kubernetes authentication type I was mentioning before.

For the first step, I've chosen Ansible in order to install and configure the Consul server, because automation makes sense in terms of Consul deployment strategies when configuration management is involved. As you already know, Ansible is an open source tool that automates provisioning, configuration management, application deployment, and many other manual processes. One key benefit that Ansible provides is that its modules are idempotent. This means that we are interested in the end state, and no matter how many times we perform those actions, the result will always be the same.

»Demo: L7 Traffic Management

How is this done? Let's quickly get into it. In order to install and configure Consul, we need to perform certain actions on each of the nodes. The first one would be determining which Linux distribution we are using. The next step would be to add the official repository in order to be able to download the Consul binary. Thirdly, we need to add the Consul user and group and create all the associated directories. We also need to copy the configuration file we have custom-defined, and at the end, enable and start the Consul service.
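The steps above could be sketched as an Ansible role like the one below. This is not the role from the talk; package names, paths, and file names are illustrative, and each module is idempotent, so re-running the role converges to the same end state:

```yaml
# tasks/main.yml — sketch of the Consul installation steps (illustrative).
- name: Report the detected Linux distribution
  ansible.builtin.debug:
    msg: "Installing Consul on {{ ansible_distribution }}"

- name: Add the official HashiCorp repository (Debian/Ubuntu branch)
  ansible.builtin.apt_repository:
    repo: "deb https://apt.releases.hashicorp.com {{ ansible_distribution_release }} main"
    state: present
  when: ansible_os_family == "Debian"

- name: Install the Consul package (creates the consul user and group)
  ansible.builtin.package:
    name: consul
    state: present

- name: Copy the custom-defined configuration file
  ansible.builtin.copy:
    src: server.hcl
    dest: /etc/consul.d/server.hcl
    owner: consul
    group: consul
    mode: "0640"

- name: Enable and start the Consul service
  ansible.builtin.systemd:
    name: consul
    state: started
    enabled: true
```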

Imagine I'm using a single server. What would happen if we needed to configure a hundred, or even a thousand, servers? This is the magic of Ansible: everything is beautifully automated, I would dare to say. The next step is to take a look at what happens on the Kubernetes side. In order to set up the Kubernetes cluster, we'll perform the following actions. The first one would be setting up a Kubernetes service account and assigning it certain privileges within the cluster so that Consul is able to authenticate. Next would be setting up the Kubernetes authentication method, installing the agent and connect inject, and in the end, confirming everything is working as expected.

Let's break it down. Firstly, we need to create a service account, assign certain cluster role privileges, and of course create some cluster role bindings for Consul. Look at the code snippet and you'll notice something interesting. I also need to mention that I've used Kubernetes 1.24 for this demonstration. What you need to take into account when using this version is that starting with Kubernetes 1.24, service account token secrets are no longer automatically created. This means that we need to manually create a secret, and the token key in the data field will be automatically set for us. So be aware of this small detail.
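The original snippet isn't reproduced here, but the Kubernetes 1.24 detail can be sketched as follows. The names and namespace are placeholders; the annotation and secret type are what make Kubernetes populate the `token` key in the data field automatically:

```yaml
# Sketch: service account plus the token Secret that Kubernetes 1.24+
# no longer creates automatically (names are illustrative).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-auth-method
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: consul-auth-method-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: consul-auth-method
type: kubernetes.io/service-account-token
# Kubernetes fills in .data.token for this secret type automatically.
```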

Next we are creating an authentication method. There are two prerequisites for this step. The first one would be the fact that we need three pieces of information exported from the Kubernetes cluster. As you see, I'm using the JWT token and the CA certificate, which I've extracted from the Kubernetes cluster. And as you also see, there is a Consul ACL token; this means that for the Consul server, I have already enabled ACLs. In case you don't use ACLs, which is discouraged in my view, pay attention to this.
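Creating the auth method on the server side could be sketched with the Consul CLI as below. The environment variables stand in for the three pieces of information exported from the cluster (API server host, CA certificate, service account JWT); all values are placeholders:

```shell
# Sketch: registering the Kubernetes auth method with Consul.
# K8S_HOST, ca.crt, and SA_JWT come from the Kubernetes cluster;
# CONSUL_MGMT_TOKEN is needed because ACLs are enabled on the server.
consul acl auth-method create \
  -type=kubernetes \
  -name=my-k8s-cluster \
  -kubernetes-host="https://${K8S_HOST}:6443" \
  -kubernetes-ca-cert=@ca.crt \
  -kubernetes-service-account-jwt="${SA_JWT}" \
  -token="${CONSUL_MGMT_TOKEN}"
```

A binding rule would then map Kubernetes service accounts to Consul roles or service identities.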

We are almost there, actually. The picture looks nice. However, before that, we also need to install the Consul agent and connect inject using the official Helm chart. In this case, the Consul agent will run as a DaemonSet. This ensures that there is a pod on each of the nodes comprising the cluster. Connect inject, on the other hand, will be a single-pod deployment.
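A sketch of Helm values for this step might look like the following. The server address is a placeholder, and key names can differ between chart versions:

```yaml
# values.yaml — sketch: client agent as a DaemonSet plus connect inject
# as a single-replica deployment, joining a server that lives on a VM.
global:
  enabled: false
client:
  enabled: true          # deployed as a DaemonSet, one pod per node
  join:
    - "203.0.113.10"     # placeholder IP of the Consul server VM
connectInject:
  enabled: true
  replicas: 1            # single-pod connect inject deployment
```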

And this is how everything looks. As you can see, the Consul server has a public IP address. Well, the name is not that happy. However, notice that on the side of the Kubernetes nodes there are some IP addresses which are private.

Okay. We now have a Consul server sitting on a VM, and a Kubernetes cluster of 4 nodes that has joined this datacenter as a client. This is an example of using Consul in a hybrid environment. I have chosen this deployment strategy out of the three because usually companies and large clients are transitioning from an infrastructure of VMs to microservices sitting on cloud or sitting on Kubernetes mainly. Regardless of where Kubernetes is. This is why I think this example might suit many of us.

What's next? We have a datacenter. We have Consul joining the cluster. What should we do next? Well, Consul service mesh offers a flexible and comprehensive set of service discovery and traffic management features at L7. The service discovery process can be thought of as a discovery chain, which passes through three distinct stages: routing, splitting, and resolution.

Routing would be the first stage of the L7 traffic management pipeline. It allows the interception of traffic using L7 criteria such as path prefixes or HTTP headers. The next stage is the service splitter configuration, which allows splitting incoming requests across different subsets of a single service or across different services. And the final stage is, of course, the service resolver configuration. This allows defining which instances of a service should satisfy discovery requests for the provided name.
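The three stages could be sketched as the following config entries, written with `consul config write`. Service names, the path prefix, weights, and subset filters are all illustrative, not from the talk:

```hcl
# router.hcl — stage 1: route /api traffic from "web" to the "api" service.
Kind = "service-router"
Name = "web"
Routes = [
  {
    Match {
      HTTP {
        PathPrefix = "/api"
      }
    }
    Destination {
      Service = "api"
    }
  }
]

# splitter.hcl — stage 2 (separate config entry): send 10% of "api"
# traffic to the v2 subset.
# Kind = "service-splitter"
# Name = "api"
# Splits = [
#   { Weight = 90, ServiceSubset = "v1" },
#   { Weight = 10, ServiceSubset = "v2" },
# ]

# resolver.hcl — stage 3 (separate config entry): define the subsets by
# instance metadata.
# Kind = "service-resolver"
# Name = "api"
# DefaultSubset = "v1"
# Subsets = {
#   v1 = { Filter = "Service.Meta.version == v1" }
#   v2 = { Filter = "Service.Meta.version == v2" }
# }
```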

Well, I would sadly say that this is the end, but we are not quite there. We have discussed today how the industry is shifting away from traditional infrastructure types, and how infrastructure is supporting development's migration toward a microservices-oriented paradigm. Do not underestimate this effort, because it's quite real. Infrastructure people and development people need to work together. I've chosen Consul as an example of aiding this transition in order to ensure a smooth migration. We looked at what Consul is, how Consul has evolved in the past years, what problems Consul is trying to address, and how we are able to leverage all its capabilities. And stay tuned, I'm quite sure Consul will evolve nicely. Thank you everyone. I hope you enjoyed this presentation, and thank you for having me at HashiConf.
