Introducing Nomad 0.9
Nomad 0.9 features pluggable execution drivers, specialized hardware support, affinities, spreads, and preemption. HashiCorp co-founder and CTO Armon Dadgar shares the details of these highlights.
Speakers
- Armon Dadgar, Co-founder & CTO, HashiCorp
Transcript
We're excited to announce the availability of Nomad 0.9, which is the newest version of Nomad. What we're adding is a first-class ability to extend Nomad, supporting new functionality brought in through external plugins. We've focused on three different classes of plugins to begin with.
Plugin system for new task drivers
One is the task drivers, or the execution engines of Nomad themselves. How do we start to plug in alternative containerization technology? How do we plug in alternate virtualization technology? Instead of just saying there's a limited set of integrated, native drivers that we ship with, we've made that a pluggable interface, so our users can build interesting, novel plugins and extend Nomad with new task drivers.
We're already starting to see some very interesting ones like native Node.js capabilities where an application developer can just launch a Node.js application and have that containerized transparently. We're excited about what we think that's going to open the door to in terms of new ways of consuming Nomad.
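To make that concrete, here is a minimal sketch of how an external task driver might be wired in. It assumes a hypothetical driver named "nodejs" dropped into the client's plugin directory; the actual plugin name and its configuration options depend on the driver you install.

```hcl
# Client agent configuration (sketch): point Nomad at a plugin directory
# and configure the external driver. The "nodejs" driver name is illustrative.
plugin_dir = "/opt/nomad/plugins"

plugin "nodejs" {
  config {
    # driver-specific options would go here
  }
}

# Job specification (sketch): tasks reference the external driver by name,
# just as they would the built-in docker or exec drivers.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    task "server" {
      driver = "nodejs"

      config {
        # driver-specific task options, e.g. the script to launch
      }
    }
  }
}
```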
Support for different hardware types
The other thing we've been focusing on is how to make use of different types of hardware. So, along with the plugins supporting different execution drivers, we also now support different devices being plugged in. This includes things like FPGAs, GPU devices, and specialized networking equipment.
Our goal is to make Nomad as general as possible for scheduling purposes, so that if I'm using Nomad to schedule across a large GPU farm, I can schedule my job and say, "I specifically need this model of Nvidia device," or, "I need a certain amount of memory on my GPU to be available for my job to function." Now I can expose these devices as plugins to Nomad, and then within my job specification I can constrain my job to run only on machines that have that device available. This makes Nomad much more extensible and allows GPUs to be natively consumed and managed by the scheduler.
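In the job file this shows up as a device stanza inside the task's resources. Here is a minimal sketch assuming the built-in Nvidia device plugin; the exact attribute names, such as device.attr.memory, are exposed by the individual device plugin and may differ.

```hcl
job "train" {
  datacenters = ["dc1"]

  group "workers" {
    task "trainer" {
      driver = "docker"

      config {
        image = "tensorflow/tensorflow:latest-gpu"  # illustrative image
      }

      resources {
        # Request one GPU and only place the task on nodes whose GPU
        # advertises enough memory for the job to function.
        device "nvidia/gpu" {
          count = 1

          constraint {
            attribute = "${device.attr.memory}"
            operator  = ">="
            value     = "4 GiB"
          }
        }
      }
    }
  }
}
```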
Affinities
The other big changes in Nomad 0.9 are enhancements to the scheduler itself, particularly the addition of affinities. This lets us say that one application should be co-located near another application, or express an anti-affinity: that an application should not be co-located with another application.
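In the job specification, this is expressed with an affinity stanza, which scores candidate nodes rather than hard-filtering them. A minimal sketch, assuming clients expose a custom meta.rack attribute; a negative weight would turn this preference into an anti-affinity.

```hcl
job "cache" {
  datacenters = ["dc1"]

  group "redis" {
    # Prefer nodes in rack r1. Weight ranges from -100 to 100;
    # negative values express an anti-affinity.
    affinity {
      attribute = "${meta.rack}"
      value     = "r1"
      weight    = 50
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:5"
      }
    }
  }
}
```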
Specifying spreads
Nomad 0.9 also adds the ability to specify a spread. When we specify a spread, we specifically do not allow an application to be bin packed, meaning we don't want to fit as many instances as possible on a machine; we want to peanut butter it across multiple machines and spread it out. This is useful for things like availability, or reducing the load on a server, particularly in private data centers where we're worried about power consumption.
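A spread is declared in the job file with the attribute to spread over and optional target percentages. A minimal sketch, assuming two datacenters named us-east-1 and us-west-1:

```hcl
job "web" {
  datacenters = ["us-east-1", "us-west-1"]

  # Spread allocations across datacenters instead of bin packing them
  # onto as few nodes as possible.
  spread {
    attribute = "${node.datacenter}"
    weight    = 100

    target "us-east-1" {
      percent = 50
    }

    target "us-west-1" {
      percent = 50
    }
  }

  group "app" {
    count = 10

    task "server" {
      driver = "docker"

      config {
        image = "nginx:1.15"  # illustrative image
      }
    }
  }
}
```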
Native preemption capability
The other feature that was added was preemption. On heavily loaded clusters, a risk you run is priority inversion: if I have a lot of low-priority work consuming my whole cluster and a high-priority job shows up, Nomad previously had nothing it could do other than wait until the low-priority work finished. In this sense, you had an inversion of priority: the high-priority work was being forced to wait on low-priority work.
With native preemption capability, what Nomad can do is evict the low-priority work. When the high-priority job shows up, Nomad will evict the low-priority work that's running to make space to execute the high-priority work. When the high-priority work finishes, the low-priority work can continue to be scheduled. In this way, we get a natural, correct priority ordering in terms of which workloads are allowed to run first.
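Preemption decisions are driven by the job's priority field (0 to 100, with a default of 50). A minimal sketch of two job files follows; the job names and images are illustrative, and which scheduler types are eligible for preemption depends on the cluster's scheduler configuration and Nomad version.

```hcl
# batch-analytics.nomad (sketch): low-priority work that can be preempted.
job "batch-analytics" {
  datacenters = ["dc1"]
  type        = "batch"
  priority    = 20  # below the default of 50

  group "crunch" {
    task "report" {
      driver = "docker"

      config {
        image = "acme/report-builder:latest"  # illustrative image
      }
    }
  }
}

# fraud-detection.nomad (sketch): high-priority work that may preempt
# lower-priority allocations when the cluster is full.
job "fraud-detection" {
  datacenters = ["dc1"]
  type        = "service"
  priority    = 80  # above the default of 50

  group "api" {
    task "detector" {
      driver = "docker"

      config {
        image = "acme/fraud-api:latest"  # illustrative image
      }
    }
  }
}
```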
All of these are available as a part of Nomad 0.9, which we're excited to share with the community.