Case Study

SeatGeek and the HashiStack: A Tooling and Automation Love Story

At SeatGeek, 5 engineers manage 200+ Nomad jobs and 2500+ Nomad allocations running on ~200 EC2 instances while servicing operations for over 100 other engineers. See how they do it with the help of the HashiStack.

Andrei Burd, senior infrastructure engineer at SeatGeek, has gradually adopted all of the HashiCorp tool stack starting with Vagrant in 2013 and concluding with Nomad in 2016. Today, SeatGeek uses many components of the HashiStack to provide tickets to millions of customers.

This session will focus on the open and closed source tools, automation, and self-service options SeatGeek exposes to the rest of their organization so they can keep up with over 100 engineers and millions of users using their platform 24/7.

Nomad, Consul, and Vault are at the heart of their infrastructure. Close to 100% of their infrastructure is fully Nomad-managed. Nomad is their real init system. Currently, SeatGeek has 200+ Nomad jobs and 2500+ Nomad allocations running on ~200 EC2 instances in production, and it's all managed by a team of 5 engineers. They'll show how these clusters look and they'll run down their list of tools, the problems they solve, and how they use them.


Transcript

Hi, everyone. I’m Andrei. Nice meeting all of you.

“SeatGeek and the HashiStack: A Tooling and Automation Love Story.” Well, love and hate are really close to each other.

I’m a senior infrastructure engineer at SeatGeek. I’m a co-organizer of the HashiCorp User Group Tel Aviv and devopsdays Tel Aviv. We’ve got a really awesome community in Israel right now, and one of the projects we’re running is OpsSchool. That’s where we take people who are a bit stuck in their sysadmin or QA life and want to move to DevOps, and we teach them all of this stuff, a lot of HashiCorp stuff. And then we’ve got new DevOps engineers.

Let’s talk about SeatGeek. SeatGeek is a ticketing company, a really big inventory hub for all the tickets in the US and around the world. We can help you save money on tickets. And as you can see in this awesome animation, we show you where you will be seated in the stadium and other venues.

Stepping into the HashiStack with Vagrant and Consul

Let’s start with my adoption timeline of the HashiStack, because today HashiCorp has a lot of products. When you go to their website, normally they show Terraform, Nomad, Consul, and Vault. But there were other, smaller things too. I started using [Vagrant](https://www.vagrantup.com/ "HashiCorp Vagrant") back in 2014, and it was really nice. Then, at my previous company, Yotpo, we said, “OK, now we have to go multi-region, with service discovery and microservices and everything. Let’s deploy Consul.”

We deployed Consul across our whole fleet. It was 27 instances in 2 datacenters. And it didn’t do anything for 9 months. We just had Consul running with services registered, and we didn’t use it. But then we started to work on compliance and other things, and we wanted dynamic credentials and passwords for the database. How did we implement it?

Fortunately for us, in 2016 we got Vault. We could just wrap a little Slack bot around it and have our credentials with TTLs, with an audit log, and with everything we need. Then more good stuff came to our shop. We started to build our AMIs (Amazon Machine Images) with Packer and use Terraform to provision our auto scaling groups.

And what else did we need to add? Nomad. It was 2016, and Nomad was at 0.5. But after doing a couple of PoCs with Docker Swarm, Kubernetes, and all the other options, we decided, “OK, Nomad is our choice.” And I’m still really happy we did it, even though I’ve since left the company.

The timeline

  • Vagrant, 2013

  • Consul, August 2015

  • Vault, August 2016

  • Packer, September 2016

  • Terraform, September 2016

  • Nomad, November 2016

Helping the Hashi community

Let’s talk about the guy who inspired this talk and who should have given this talk. His name is Christian Winther. He’s my team lead right now, the infrastructure development manager at SeatGeek. Most of the community knows him from his awesome open-source work, because he’s the author of hashi-helper, nomad-helper, and nomad-firehose, and he maintains hashi-ui.

Let’s talk about hashi-ui now, because its history is not so linear. It was born in 2016, when most of us started to use Nomad, back when it was 0.2, 0.3. The problem with Nomad was that it was an awesome scheduler, but it had no UI. So it was really hard to understand what you had inside: what’s running, what’s not running, how you can do something.

And then an awesome guy named Ivo Verberk started to build nomad-ui, with a React frontend and a Go backend. That became hashi-ui, and it showed us the status of servers, the status of nodes, the status of jobs: all the things you wanted to know about Nomad.

But as you know in the open-source world, you start to work on something, and then your company loses interest in it, or you do, or anyone else does. When Ivo stopped working on it, Christian took over, adopted it, and took full custody of hashi-ui. And it works. I’m an active contributor to it, too (with 1 commit).

Looking inside the UI

In the UI, we can now see the Nomad and Consul parts. The first thing you see is the list of regions. When you go inside, you can pick the region you need. In Consul, by default, we can see our services.

Right now this UI looks a bit rough, because we don’t have the awesome UI people that HashiCorp has. It was written in 2016, when the Consul UI was static and not being updated, and we wanted new features. And this is how it works: there is filtering, and it can automatically update itself with Ajax requests and all this stuff. It was really nice.

The one feature that we requested for years in the Consul UI and never received is a deregister button. Normally people don’t need to deregister their services, but if you need it, we’ve got a button to do it. And a lot of other things, like multiple checks, can be seen here. It’s pretty useful.

Nomad received its UI around 0.7, and before that we were really blind. But hashi-ui gave us the option to see what’s happening with our cluster: all the CPU usage, RAM usage, how many jobs we have, and all that stuff. It’s super useful. I adopted it before I joined SeatGeek, and it helped us just keep moving.

Sorting services

Jobs can be sorted by type (service, batch, system), so you can see only the things you want. You can see the allocation status, the priority, all this stuff.

Before yesterday’s keynote, and for the past 2 years, this was our only option for seeing what’s happening inside the cluster.

When we go to our Nomad clients, we can see the same kinds of things. We can filter here by name and by status: is it up, is it down, should we drain it? What is the class? What is the version? We can see the CPU usage of every client. It’s really useful when you’re running more than a 5-node cluster.

Another really nice feature: when you click on one of the clients, you can see all the information that the commands we use, like nomad node status, would show you. You can see everything in the UI. And you can even drain the node or disable scheduling eligibility for it.

And if you want to check something: How is your new autoscaling group doing? Does it have the right Vault version? What are the meta properties? This is super nice. We will talk about how we use all these meta properties when we get to nomad-helper. But it’s just a really nice UI that shows us general info. We have stats about CPU and RAM usage. We can see all the allocations running on this node. It’s everything you can get from the CLI, but in your browser, colorful and automatically updated.

Helper tools

CLI tools are not so good looking, not so fancy, but they give us the ability to do a lot of really good stuff. hashi-helper was previously known as vault-helper, and like all these internal tools, it has a fun history. SeatGeek open sourced hashi-helper. It’s a disaster recovery and configuration manager for Consul and Vault. Why did we build this if we’ve got Terraform, which can provision your Consul key-value store and your Vault secrets?

If you tried to configure Vault with Terraform a year or 2 ago, sometimes when you upgraded Vault clusters you got fun issues, like it just crashing. And then you had to erase all your state and run Terraform apply against your Vault cluster again from the beginning. And this is not a nice option.

And if you’ve got change management and all that other stuff, it’s not automatic. So Christian wrote hashi-helper, which basically wraps all of this: writing to Consul and to Vault’s generic secrets backend with regular API calls. It gives us the ability to push everything, to both Vault and Consul. And we can do a Vault unseal with it, which is super useful when you don’t have auto-unseal set up. We keep our unseal keys encrypted with GPG, through Keybase.

When you have to unseal Vault

When we rotate our Hashi boxes that run Vault, maybe for a new version of Vault or a bigger instance, we don’t have to go and find these instances one by one. We can just run hashi-helper’s unseal command.
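As a minimal sketch of that flow, assuming the Keybase-backed unseal subcommand (the exact command name can differ between hashi-helper versions):

```shell
# Decrypt the unseal key via Keybase, discover sealed Vault instances through
# Consul, and send the unseal call to each of them.
hashi-helper vault-unseal-keybase
```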

The unseal key is decrypted from Keybase. The tool goes to your Consul, finds all the Vaults that are sealed, and sends the unseal to them. It works really nicely. And if you have more than a couple of environments in every region, it helps you a lot, because when you start to promote new versions across them, you have to do a lot of unseals. Then we have vault-push and other commands, yadda yadda. The configuration looks like this, on this slide: you’re just saying, “OK, here’s my environment.” You can use whatever.

Let’s say the application name must match the filename. We can add the policy, and we can add the secrets. It’s the same kind of HCL you’d write to Vault. But because it’s not Terraform, we have no state. We have nothing.
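As a rough sketch of the shape of such a file (the block names, paths, and values here are illustrative, not hashi-helper’s exact schema):

```hcl
# Hypothetical environment file: one environment, one application whose name
# matches the filename, with a policy and a couple of static secrets.
environment "production" {
  application "my-api" {
    policy "my-api-read" {
      path "secret/my-api/*" {
        capabilities = ["read", "list"]
      }
    }

    secret "config" {
      DATABASE_URL = "postgres://db.internal/my-api"
      API_TOKEN    = "example-token"
    }
  }
}
```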

The good thing about Vault is that you just write things there, so you can override them. And if you’re doing GitOps, and everything is in your configuration, you don’t have to maintain state. You just write the same configuration again and it works.

With Terraform we had a couple of problems when we upgraded Vault or Terraform. And because the Vault provider was really actively developed in the last year, we had a lot of crashes. That’s why we use hashi-helper instead.

Managing profiles

One of my favorite things we’ve done lately is profile-use. Most of you who run Nomad, Vault, and Consul are not running only one cluster, right? You’ve got your production, your staging, different regions, and you have to log into all of them.

Even if your Nomad and Consul don’t have ACLs enabled, you will have to log into Vault. And no one wants to use static tokens. No one uses the root token. All of us want to use, let’s say, GitHub auth, because GitHub gives us a really nice way to log into Vault, receive our token, and use it.
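For reference, logging into a single Vault with GitHub auth on the standard CLI looks roughly like this:

```shell
# Exchange a GitHub personal access token for a Vault token; the Vault CLI
# caches the result in ~/.vault-token, one token per machine.
vault login -method=github token="$GITHUB_TOKEN"
```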

The problem is, if you have a lot of environments, and you have to log into production, then staging, then development, then back to production, and so on, you end up with a lot of profiles. But in your home directory you have .vault-token, and there is only one token saved there.

Every time you change environments, you have to log in again. And I really don’t like generating tokens again and again and again. The token you generated 5 minutes ago is still valid, so let’s cache it and reuse it. So what we’ve done in hashi-helper is let you have a profile for each of your environments.

You can see here, I’ve got profile 1 and profile 2. In profile 1, I’ve got a Vault server living at some address. To authenticate there, I add my Vault token, which can be a static one. In my second profile, the auth method is GitHub, and I provide my GitHub token. And if my Consul and Nomad have ACLs enabled too, I can say that Consul’s auth method is Vault and my credentials come from consul/creds/administrator. And the same for Nomad.

If you want to go to Nomad, you just tell it, “Talk to Vault and bring me credentials from here.” The binary uses Keybase to encrypt and decrypt this configuration file, because obviously you will need to store your tokens in here, and that’s sensitive data. And what happens when I do hashi-helper profile-use us-production is that it decrypts my profiles file, checks my cache, and prints for me the Vault address, Vault token, and Consul address.

Here we don’t have authorization enabled for Consul. But for Nomad, it will be a token that I received from this Vault. The nice thing is that tokens are reused for as long as they’re still alive. And the cache and the profiles file are encrypted with your Keybase key, so it’s safe. It’s just good stuff.
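A minimal sketch of how that looks in practice, assuming a profile named us-production and that the printed output is shell export statements you can load into your current shell:

```shell
# Decrypt the Keybase-encrypted profiles file, reuse any still-valid cached
# tokens, and print the addresses and tokens for the selected environment.
eval "$(hashi-helper profile-use us-production)"

# The usual CLIs now point at that environment.
echo "$VAULT_ADDR"
nomad status
```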

The next CLI: nomad-helper

Let’s say you can log into your various Nomad clusters, but Nomad is an actively developed tool, and we have to update it from time to time. Now we’ve got 0.9.3; before it was 0.9.2. How can we deploy the new version and make sure our workload moves from the old nodes to the new ones without any problems?

nomad-helper is a binary we developed to do things that are not in the original Nomad, Consul, and Vault binaries. It gives us a couple of options that are only now becoming available in the latest Nomad releases, like nomad-helper attach: when you have to debug something, you normally have to go to the Nomad client, find the allocation on the machine, and then do docker exec and all this stuff. nomad-helper can do this for you.

You can just run nomad-helper attach with an allocation ID, and you will be inside the container. You can also force GC (garbage collection), because the stock binary doesn’t give you a CLI command for that.

Another really nice one: if you have to copy your environments one-to-one, because you’ve got production and you have to run staging, and then staging No. 2, and then a staging for this one client (you know how it works), and you have to copy your whole environment job by job, you can use nomad-helper scale. The name is not so intuitive, but it gives you the option to export all the jobs that are running and then import them. It will be a huge HCL file, and you can just clone your whole running Nomad cluster.

There’s also tail, which works the same way for seeing the logs.
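A rough sketch of those invocations (the allocation ID and file name are example values, and argument shapes may differ between nomad-helper versions):

```shell
# Exec into the container behind an allocation without hunting down the
# client node and running docker exec by hand.
nomad-helper attach c3f9e4a1

# Export the running jobs from one cluster, then import them into another
# to clone the environment.
nomad-helper scale export production-jobs.hcl
nomad-helper scale import production-jobs.hcl

# Tail the logs of an allocation.
nomad-helper tail c3f9e4a1
```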

Visualizing the Nomad cluster with stats

The really good stuff is stats. How well can you visualize your Nomad cluster, so you understand the AMI version, the node count, the eligibility, and so on? nomad-helper stats can take any dimension from any metadata field and build a table that groups your instances by whatever you want.

Here we use the dimension “AWS AMI version,” because in our deployment pipeline our AMIs are versioned. Here it’s 3.5.0, the latest one. And as you can see, we have a couple of outdated AMIs still running in the cluster. We also want to know their class, and whether they’re eligible for scheduling.

It will show us all the nodes in the cluster and all the agents. This is super useful when you deploy a new Nomad version: it shows you whether everything has already moved to the new version or not. The dimension you want might also be instance type, or which nodes are on-demand versus spot instances. All this stuff is super useful.
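Something like the following, where the dimension names are illustrative rather than nomad-helper’s exact flag syntax:

```shell
# Group the cluster's nodes into a table by AMI version, node class, Nomad
# version, and scheduling eligibility.
nomad-helper stats \
  --dimension meta.aws-ami-version \
  --dimension node-class \
  --dimension nomad-version \
  --dimension eligibility
```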

Easier instance deployment

The best part is for when you deploy a new version and you want to disable the eligibility of, or drain, the old machines. Let’s say we have a rollout lifecycle where, when we deploy new instances, we only want to shift 50% of our workload from the old instances to the new ones, to make it like a canary deploy. What we can do is run nomad-helper node with a class filter. We tell it: only for the class hashiconf-eu-2019, with a meta AMI version like this one, take only 50% of the machines and disable their eligibility.
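A sketch of that kind of invocation; these flag names illustrate the idea rather than the exact nomad-helper syntax:

```shell
# Select nodes by class and AMI-version metadata, then mark half of them as
# ineligible for scheduling so new work lands only on the new instances.
nomad-helper node \
  --filter-class hashiconf-eu-2019 \
  --filter-meta "aws-ami-version=3.5.0" \
  --percent 50 \
  eligibility disable
```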

This is super useful when we deploy new instances, and we want to test something new in production. This really helps.

Deploying with firehose

What’s sometimes lacking in Nomad is visibility into history. Like, “What happened to my allocation? What happened to my job? What happened to my client?”

For this, we have a tool named nomad-firehose. What nomad-firehose does is watch Nomad: it does HTTP long polling on the status APIs. You can run it for different resources (jobs, clients, allocations, and other things), and it adds everything that happens to a queue: you made a deploy, something was rescheduled, a Docker container was killed because it ran out of memory. Anything that happens in your cluster goes into this queue.

And then you can add it to your database and build alerts on it. Or, if you want, just search it afterwards as logs. You can send this to Elasticsearch, Graylog, whatever you need. Because sometimes you will have angry developers saying, “Hey, I had this application running. What happened? Why did it die?” This is really nice.
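A rough sketch of running it; the sink selection via environment variable is an assumption here, so check the nomad-firehose README for the exact configuration:

```shell
# Stream allocation, job, and node state changes; the sink (a queue, stdout,
# etc.) is chosen through configuration rather than flags in this sketch.
SINK_TYPE=stdout nomad-firehose allocations
SINK_TYPE=stdout nomad-firehose jobs
SINK_TYPE=stdout nomad-firehose nodes
```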

Protection for Redis instances

Another open-source tool with a really fun history is called ReSeC. The name ReSeC comes from the capital letters in Redis Service Consul. There is always a story behind every open-source tool. On a really nice day in Tel Aviv, in 2017 or so, we had a devopsdays conference. I was sitting at the booth saying, “Yeah, we’re hiring, just come join us.”

There is a video about how fun that conference was. It was not fun for me, because we were running our data in AWS, and AWS sometimes does chaos-monkey engineering for you.

One of our Redis instances just blew up. And of course that Redis instance was not only a cache. It held our internal job processing, which was really critical, and we had a cascading failure because of it. So I missed the end of the conference.

But 2 weeks after this, on YouTube, I saw an awesome video of Preetha Appan talking at HashiConf 2017 about how to deploy your review delivery service, or something like that. She said that if you only need one Redis running, you can leverage Consul locks. Say you have 2 instances: 1 running Redis, and 1 waiting for Consul to release the lock and then run Redis for you. I said, “Let’s do the same.” This Consul lock gives us master-slave elections in our Redis cluster. This is how ReSeC was born. When we run it, it starts up, tests whether Redis is alive, checks whether there is already a master in Consul, holds an election, acquires the Consul lock, and then registers the master in Consul.

Now we have the Redis master registered in Consul, and it’s running. All the other Redises that are running will automatically be pointed at it as slaves. And when the master dies, the lock in Consul is released, there is an election, and one of the Redises that already has the data is elected master. Everyone else becomes a slave of the new master.

For our application, it’s still the same Redis master service in Consul, and you can just keep using it. We had no data loss after that.
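ReSeC implements this itself against the Consul API, but the underlying pattern is the same one you can see with the stock Consul CLI’s lock command, shown here only to illustrate the idea rather than how ReSeC is wired up:

```shell
# Every node runs this; only the current holder of the lock actually starts
# the child process (the master). When it exits, the lock is released and
# another node's command starts.
consul lock resec/leader redis-server /etc/redis/redis.conf
```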

Open source is super nice, because this project is what got SeatGeek to hire me. I had been talking to Christian, and I was using hashi-ui; he started to use ReSeC. So to everyone who has not contributed yet: guys, you have to do it. It will do good for the community, for you, for everyone. Let’s do it.

Libra for scaling

Another thing from the community is Libra, not the Facebook currency. It was a really nice project, though really short-lived. It was written by a guy named Ben, an intern at Under Armour 2 years ago, and it’s an autoscaler for Nomad.

It was not ideal, but we adopted it and added the logic that we needed. It’s something we run in production that can scale as many jobs as you want. Right now it scales about 200 jobs based on InfluxDB data, and everyone is really happy. It scales our job processing based on queue size.

If in your queue you have 0 messages, you will have 1 worker. If you have 1,000, you can scale to 1,000 workers. It’s really neat.

This slide shows how you can use it. This one is for an nginx-prod job, with the group, the minimum, and the maximum, and it’s based on AWS CloudWatch metrics. It works really nicely: rules to scale up, and rules to scale down.

That’s it. I’m Andrei, I’m from SeatGeek. We are hiring. Thank you very much for having me here.
