HashiCorp Waypoint Deep-Dive
Learn about the architecture, components, and the plugin model of Waypoint and see how it deploys software in this deep dive demo.
Speakers
- Evan Phoenix, Waypoint Engineer, HashiCorp
Transcript
Welcome to HashiConf Digital 2020. Hi, my name is Evan Phoenix, principal engineer at HashiCorp, and today we're going to be taking an in-depth look at Waypoint and how it works.
Let's do a quick review of what Waypoint looks like from the outside. On the screen, we've got some output from running waypoint up, deploying a simple app to Kubernetes. Up is composed of 3 separate phases:
Build
Deploy
Release
It's all configured with a simple configuration file, again with build, deploy, and release. Simply put, it helps take your application code and deploy it to a number of different platforms. The aim of Waypoint is to create a common workflow by integrating all of these various components.
Today in the deep dive we're going to be looking at 4 separate items:
Configuration language
Client/server architecture
The Waypoint entrypoint
The plugin architecture
Waypoint Configuration Language
It's HCL, as you might expect, which also means it supports JSON. It's project based, which means that there can be multiple applications specified within 1 project file. And because it's HCL, it has the ability to run functions inside the configuration, which adds significant power to the configuration language.
Let's take a look at those functions more in depth. Here we've got that same configuration file we looked at earlier, but we're going to focus on build. Let's get rid of the other pieces for a second.
One thing you'll notice is that we've forgotten to configure which Docker tag to use; we've just specified the image. Many deployment environments use git references for tags, and we can do the same with just a simple function call. The Waypoint configuration file provides this top-level view of your project.
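As a minimal sketch of what that build stanza might look like (the image name is hypothetical), gitrefpretty() is the function call that derives the tag from the current git reference:

```hcl
app "my-app" {
  build {
    use "docker" {}

    registry {
      use "docker" {
        image = "registry.example.com/my-app"
        # Tag each build with a human-readable git reference,
        # such as a branch name or short SHA, instead of a
        # hard-coded version.
        tag = gitrefpretty()
      }
    }
  }
}
```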
It shows how the application is put together: where the assets are stored, what platform it's deployed to, and what strategy is used to release the individual deployments.
With this common interface, no matter which platform you're deploying to, it's going to look the same. If you're deploying to Kubernetes or Nomad, it's going to look exactly the same inside this configuration file, minus some differences for the individual platform.
And uniquely, as mentioned earlier, it has the ability to support multiple applications inside 1 of these project files, which extends the workflow to multiple applications simultaneously if you need to.
Let's have a quick look at what that looks like.
On screen, we've got that same application as before. Now let's bring in another application that's going to be handling the login for this app. This would be in that same file, just below it.
You can see all we've done is specify another application here. They're using different deployment targets, and they just work together. Viewed as one project, it's clear that the applications deploy together and span those multiple platforms.
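As a rough sketch (the app names and stanza bodies here are illustrative), a two-application project file might look like this:

```hcl
project = "storefront"

app "web" {
  build {
    use "pack" {}
  }

  deploy {
    use "kubernetes" {}
  }
}

# A second application in the same project, deployed to a
# different platform but managed with the same workflow.
app "login" {
  build {
    use "pack" {}
  }

  deploy {
    use "nomad" {}
  }
}
```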
Client/Server Architecture
This is the Waypoint architecture as you're going to experience it today. You've got the client, which has a runner inside of it; the server; your deployment platforms; and the developer, don't forget them.
Let's break it down into the individual pieces.
The server is treated like a catalog. It will track all of the deployments, the artifacts, and the releases, and it provides what we call provenance for all of what's going on.
You can always come back and ask the server, "When was this particular git SHA deployed? Did it deploy last night or this morning?" The server is the thing that can answer that question for you.
And it provides instance functionality, which we're going to talk a lot about later when we get into the entrypoint.
And lastly, it provides a UI. So you can go to the server, and it will provide a nice browser UI for you to interact with, to find out what's going on.
Switching to the client, the client and the server talk via a standard gRPC interface. The CLI is built with public APIs so that anybody can build a client to talk to the server. In fact, the UI itself is just a gRPC client that talks to it.
Most importantly, the client is queuing jobs. What's important here is that the client is interacting with the APIs. And when it wants to trigger some big action, for instance, to cause a build to happen, it asks the server to do that action via a queue.
In that way, the clients can be very thin; they don't have to be something that's going to be running all the work. They're able to say, "I need to have a build happen," and that can happen somewhere else: inside the runner.
These runners are driven by the server. They're taking those jobs off of the queue that is created by the client, and that allows this very flexible architecture.
They're also context-aware. A runner is running in some location that can be pre-configured with security credentials and in a specific place, so it can access stuff that maybe you couldn't access normally.
It's also automatic. When you're first getting started with Waypoint, setting up a runner can be cumbersome, so what we've done is have the CLI register itself as a runner to the server so that when the CLI says, "I want to do a build," the server will hand that CLI back its own job saying, "You can go ahead and do the build on your own."
Waypoint Entrypoint
We talked a little about this earlier: it's some of the runtime functionality the server unlocks.
The entrypoint runs inside your deployments. It's a process that's running alongside your application, and it connects back to the server using the address and credentials from the deployment configuration.
Then it executes your application on your behalf. In that way, it has the ability to sit in front of your application to provide things like log management, which we're going to talk about in a sec; configuration; as well as general monitoring.
It's also in the production flow. Security is going to be important for a component like this, so let's talk a little bit about that.
Because the entrypoint sits within your application, we knew that security was really important for it. So the entrypoint only makes outbound connections; it doesn't listen on any ports. It's not a vector for accessing your app.
It also doesn't sit within the release URL path. Your app binds to a port, and the load balancer connects to that port to send the traffic. The entrypoint isn't involved in that connection flow. You don't have to worry about it dropping packets or requests in any way.
It also uses a capability-based token system to manage access to the server. The entrypoint will only have access to the APIs that are specific to the entrypoint; it doesn't have access to anything else. It can't, for instance, look at the server list or anything else related to any other APIs the server has, beyond just what is needed for the entrypoint.
As with all things security-related, you don't have to take my word for it. The entrypoint is fully open source. You can go in and inspect it to find out what's going on in it and decide whether you want to trust it or not.
It's also optional. If you want to use Waypoint without the entrypoint, you can go right ahead and do that. You'll miss out on some runtime functionality, but Waypoint will work just fine without it.
Let's take a closer look at what the entrypoint looks like inside a Docker image. On screen, we have a simple Docker image. You've got the entrypoint that's launched your application, and the entrypoint is talking to the Waypoint server, and your app is talking out to the internet. So your app isn't talking through the entrypoint to the internet.
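As a rough sketch of that layout (the paths and exact invocation here are assumptions, not the official injection mechanism), the image might be arranged like this:

```dockerfile
FROM alpine:3.18

# Illustrative paths: the entrypoint binary sits alongside the app.
COPY waypoint-entrypoint /bin/waypoint-entrypoint
COPY my-app /bin/my-app

# The entrypoint launches the real application as a child process.
# It dials out to the Waypoint server for logs, exec, and config;
# the app still binds its own port and serves traffic directly.
ENTRYPOINT ["/bin/waypoint-entrypoint", "/bin/my-app"]
```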
Another really great feature of the entrypoint is automatic URLs. These are implemented with a service that we call the Waypoint URL service.
When we were building Waypoint, we felt like it was important for all applications, no matter what their deployment platform was, to be able to have deployment-specific URLs.
This allows for a lot of workflows around verification of deployment before releasing, around testing applications, around developers being able to push something out and test it and look at it without having to push it out to production.
The way that we've achieved this is with the Waypoint URL service.
As you use it today, it's entirely powered by HashiCorp; we run it on your behalf. You can go ahead and use it today as you're running Waypoint.
Accessing this service is entirely automatic. The first time you install the server, it will talk to the URL service and get access to it. You'll be able to use it automatically.
It does have a few limitations. Right now it's HTTP-only. Other protocols such as Raw TCP will come in the future. It also only supports HTTPS on the ingress.
We have some limits on it to prevent abuse. There are request-per-second and bandwidth limits in effect.
That being said, it is fully open source. If you want a version that doesn't have any of these limitations, you're able to go out, download the source for the URL service, run it yourself, and have your Waypoint entrypoints connect to your own URL service. Then you can do whatever you want with it in that way.
How do you use and see these automatic URLs? We've already seen them. When we saw this before, it was right here at the bottom. That hostname was auto-generated by the service because one wasn't requested. And that --V1 suffix indicates that this is the first deployment for that application.
If we were to do another deployment, we'd see that it's --V2, and so on. In that way you can access previous deployments that are still running.
Live Debugging
Waypoint provides a live debugging environment inside an application that's currently running. It runs in the same context as that application, so you can do one-off tasks like run database migrations, look at asset files, maybe check to make sure that everything is working correctly.
All that is secured through the server. The server's managing all those exact requests to make sure that they're coming from the right clients.
Let's have a quick look at the flow around running a command.
In this case, we're going to run a simple date command inside our application.
The client runs waypoint exec date. That gets sent as an RPC request to the server, which in turn sends an RPC request to the Waypoint entrypoint.
Remember that that connection between the server and the entrypoint is via a connection that the entrypoint made back to the server. So while we're showing this coming from the server to the entrypoint, it's not a new TCP connection, but over something that already exists.
Then the entrypoint runs the date command, which outputs just the 1 line, and sends that output back through the server and then back to the client. And when the command exits, that's relayed back through as well. So the client sees a simple output, just as it came from the application.
Application Configuration
Let's look at another feature of the entrypoint called "application config." This is a simple feature, but it's really powerful.
As you are configuring your application, you can run waypoint config set, and that will set variables. Those are all stored on the server. And those variables are scoped per application, as well as per project.
Maybe you have a project that uses 2 applications. Maybe they share an S3 bucket to be able to access assets. You can configure that S3 bucket information at the project level, and then it's shared down to all the applications.
And all these configuration variables are provided as environment variables, so it's really easy for the applications to get access to them.
Waypoint Logs
Waypoint logs are another great feature of the entrypoint. The logs are stored on the server in a rolling window of just the latest logs.
This is a developer-focused feature. It's optimized for trying to help developers understand what is going on in their application right now. It's a live view of what is happening. It's not really meant to provide long-term log storage for your application.
That being said, it's compatible with other loggers. So as the entrypoint gets output from your application and logs, it also provides that log output on its own output. So it can easily be read and used by other tools and other log-capturing infrastructure.
Let's look a little deeper at what that log flow would look like. This looks similar to waypoint exec, because it really is. In this particular case, the application has already started up and is connected to the server.
And the client has said, "I would like to get some logs." As soon as the application starts to log things, you can see that it sends that log output up to the server, and the server sees that there's a currently connected client that's interested in logs, and it sends them directly on.
You can see that it continues on with all future log output. In that way, you've got the logs streaming nicely from the application all the way up to the client, through the server.
Wrapping up the entrypoint, we see that it extends this workflow of building, deploying, and releasing to the runtime.
And that's great because it allows developers to really get in the flow of their application. Once it's in production, they're able to figure out what's going on and then edit it and then go back to the beginning to build, deploy, and release again.
These features are really focused on that day-to-day experience of what it is like to be a developer on an application. And it's still compatible with other solutions. We made sure that it wasn't going to block or do something strange so that other services couldn't access that same data.
You can easily use the entrypoint alongside a whole bunch of other tools that maybe also do logs or also do exec, without stepping on each other's toes.
Plugin Architecture
On screen, you see the same configuration file we saw before. We can think of this configuration file as a set of plugins that are communicating with each other.
The pack plugin is talking to the docker plugin for registry access, which in turn is talking to the kubernetes plugin for deployment, which in turn is talking to the kubernetes plugin again for release.
And there are values that are flowing through between all those plugins. But before we get into the values, let's talk about the component types for a second.
That first type is a builder. It has access to the application code. The idea here is, it's going to take your application and it's going to convert it into some kind of artifact that can be used by your deployment system.
In 0.1, as you're experiencing today, most of our builders are generating some kind of Docker image. Those are mostly, as you can see in the examples here, pack as well as just a regular Docker image builder.
The registry is an optional component. The idea is that it will ship an artifact to a location that can be used by your deployment platform.
It's decoupled from build because there can be a lot of different ways of getting an artifact from a local machine or a build machine to wherever you might want to put it. A separate plugin captures all that complexity.
As an example, we have a Docker image on screen, but another good example that is not on screen is AWS Elastic Container Registry, which is a separate plugin that automatically authenticates with AWS to send the image up there.
Platform is probably the biggest plugin type, and the one that people will associate most with Waypoint, because it's the thing that's talking to your deployment platform. By default, this deploys the latest artifact given from the previous values that have been passed in.
The idea behind the platform is that it creates these standalone deployments without interfering with a previous deployment. The idea is trying to set up something that's new and standalone, and isn't going to cause an issue. The examples of this are pretty clear: Kubernetes, AWS ECS, Google Cloud Run.
Lastly, we have our release plugin types, and this one is also optional. The idea behind the release plugin type is that it's going to take one of those many deployments that were created previously, and it's going to figure out which one is the one to show right now.
If you've got an application, you've got some URL, you can only show 1 deployment at a time. It's the release plugin that helps you figure out which of those deployments you want to be showing at this particular moment.
One nice thing is it's reversible. If you release a deployment and you realize, "Nope, we forgot an asset" or it's broken for some reason, you can just tell the release to go back to the previous deployment, and it will do all the work to roll you back to that still-running deployment.
Again, the examples here are pretty easy: Kubernetes, using Kubernetes services, and AWS ALB, configuring the ALB to point to various target groups.
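To make the Kubernetes case concrete, a release via a Kubernetes Service boils down to label selection. This hand-written manifest (the names and labels are illustrative, not what Waypoint literally generates) shows the idea:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # Releasing deployment v2 means pointing this selector at the
  # v2 pods; rolling back is just switching it back to v1.
  selector:
    app: my-app
    deployment: v2
  ports:
    - port: 80
      targetPort: 8080
```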
Today we've got this set of plugins:
Builder
- Docker image
- Buildpacks
- AWS AMI
- Files
Platform
- AWS EC2
- AWS ECS
- Azure Container Instances
- Docker
- Google Cloud Run
- Kubernetes
- Netlify
- Nomad
Release
- AWS ALB
- AWS ECS
- AWS AMI
- Google Cloud Run
- Kubernetes
Registry
- AWS ECR
- Docker
- Files
Building a Plugin
Let's take a quick look at what it's like to build a plugin.
It uses go-plugin, just like all the other HashiCorp tools. We provide a simple Go SDK that gives you all of the functionality to build one of those plugins. And it includes a rich set of UI components that you can use to display information to the user, such as animated spinners, terminal output, tables, all those kinds of things.
Let's take a quick look at what it looks like for plugins to communicate with each other, because I think that's an important part of what makes Waypoint interesting.
On screen is some pseudocode showing the 4 components that we use in the example configuration that we've been seeing so far. We can see a pack builder here that's outputting a pack image, and that's getting sent down to the registry plugin. That's taking the pack image in and outputting a docker image.
The next one takes a docker image into the deployment and outputs a kubernetes deployment, and the last one takes a kubernetes deployment and releases it.
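That typed pipeline can be sketched in Go with stand-in types; all the names here are hypothetical, not the SDK's real types:

```go
package main

import "fmt"

// Stand-in value types for the outputs that flow between plugins.
type PackImage struct{ Name string }
type DockerImage struct{ Ref string }
type Deployment struct{ ID string }

// Each stage consumes exactly the previous stage's output type,
// so the compiler enforces what Waypoint checks at plugin level.
func buildWithPack(src string) PackImage       { return PackImage{Name: src + ":pack"} }
func pushToRegistry(img PackImage) DockerImage { return DockerImage{Ref: "registry/" + img.Name} }
func deployToK8s(img DockerImage) Deployment   { return Deployment{ID: "deploy-of-" + img.Ref} }
func release(d Deployment)                     { fmt.Println("released", d.ID) }

func main() {
	release(deployToK8s(pushToRegistry(buildWithPack("my-app"))))
}
```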
Looking at that earlier slide with the types now, we can see those types flowing between components: from pack to docker, from docker to kubernetes, and from kubernetes on to kubernetes again.
The plugins implement specific types and output specific types, and those specific types let the plugins be implemented much more simply. Without them, we'd have to define some common-denominator data view that had to be passed between all the different component layers.
That would vastly complicate the components and not provide any benefit.
But with all of that rigidity, there are some issues. It lacks flexibility.
Waypoint Mappers
That's why Waypoint includes the concept of mappers.
To explain mappers, let's have a quick look at what they look like in the real world.
We're back at this slide. Let's make some changes to make it look like what it looks like today.
Instead of having the registry take a pack image and output a docker image, I'm going to change it to take what it really takes, which is a docker image that would be a local reference, and output another docker image, which would be some reference in the registry.
But now we have a problem. The build plugin is outputting a pack image type, but the registry is taking a docker image type. These would not be compatible; you'd get an error saying that the registry couldn't take in the output from the previous step.
That's where mappers get introduced. A mapper effectively slides itself right in between those 2. In this case, it's an image mapper that takes a pack image and outputs a docker image. We have eliminated that mismatch between the 2 by having a mapper change from one type to another.
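In code terms, a mapper is just a function from one type to another. Here's a Go sketch with the same stand-in types (names hypothetical, not the SDK's):

```go
package main

import "fmt"

// Hypothetical types mirroring the talk's example.
type PackImage struct{ Name string }
type DockerImage struct{ Ref string }

// A mapper converts one component's output type into the input
// type the next component expects. Waypoint injects mappers
// automatically when it detects a type mismatch in the pipeline.
func packToDocker(img PackImage) DockerImage {
	// A pack-built image is already a local Docker image, so the
	// mapping is mostly a matter of re-labeling the reference.
	return DockerImage{Ref: img.Name}
}

func main() {
	docker := packToDocker(PackImage{Name: "my-app:latest"})
	fmt.Println("registry input:", docker.Ref)
}
```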
Let's look at the same diagram as before, but now with the mapper. Now we've got pack talking to the mapper, the mapper converting the type so that it can talk to the registry, and then on to kubernetes.
Mappers add a lot of flexibility. If a set of components requires a mapper, as our previous one did, Waypoint automatically detects that and injects the mapper as needed.
You could do almost anything with mappers. You could have a mapper that, say, converts a docker image to an AMI, or really anything you need to take one type that a component uses and generate the type that another component needs as an input.
There can be plugins that are really only mappers, which is great for maintainability. You can have, say, a bunch of plugins that do one specific thing, and have a plugin that implements a bunch of mappers that give those existing plugins a whole new life, a whole new set of functionality.
In the future you'll even be able to declare mappers inside your Waypoint file, to do things like say, "I want a mapper that does a security scan against my docker image before it gets passed on to the next phase."
Wrapping Up
As we've seen, Waypoint unlocks a common workflow in 4 specific ways:
It gives you a configuration file that unifies the description of the application, no matter what architecture you're deploying to.
The client/server runner model gives you a lot of flexibility in how you deploy Waypoint to fit your team's needs.
Those runtime services give developers a great set of tools they can use day-to-day. And they really help extend the workflow, so it no longer ends at the deployment or release phase but reaches into the actual runtime phase, to help you figure out what changes need to be made.
The plugin architecture provides the foundation for components of all types to work together. So as new deployment platforms arise, or people build all kinds of different things, you can easily plug those into Waypoint and it will continue to work just like it does today.
Thank you very much.