HashiCorp Nomad 0.5
We are pleased to announce the release of Nomad 0.5. Nomad is a distributed, scalable, and highly available cluster manager and scheduler designed for both microservice and batch workloads.
Nomad 0.5 includes a number of new features focused on increasing cluster security and enabling new workloads to be run on Nomad. Highlights include:
- Vault Integration
- Template Block
- Sticky Volumes
- Cluster Encryption
» Vault Integration
Nomad's integration with HashiCorp's Vault gives jobs a simple, declarative syntax for retrieving Vault tokens.
Nomad tasks are annotated with the set of Vault policies that are needed. When the job is submitted, Nomad will optionally validate that the submitting user has access to the requested policies. Once validated, Nomad schedules tasks across the cluster and through careful coordination between Nomad servers, clients, and Vault, a unique Vault token is generated for every instance of the task while never exposing the token to Nomad servers.
The following example shows all that is needed for a job to request a Vault token with access to multiple policies:
task "api-server" {
# ...
vault {
policies = ["user-db-read", "payments-api"]
}
# ...
}
The Vault token is made available to the task via the canonical VAULT_TOKEN environment variable, and is also written to a file in a new secrets/ directory. The secrets/ directory is backed by an in-memory filesystem on supported operating systems and provides a convenient location to write secret data that should not be persisted past the life of the task.
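For illustration, here is a minimal sketch of a task that consumes the token; the announcement above does not name the file under secrets/, so the secrets/vault_token path used here is an assumption, and the command itself is made up:

task "print-token" {
  driver = "exec"

  config {
    command = "/bin/sh"
    # VAULT_TOKEN is injected into the task environment by Nomad; the
    # secrets/vault_token path is an assumed location for the on-disk copy.
    args    = ["-c", "echo $VAULT_TOKEN; cat secrets/vault_token"]
  }

  vault {
    policies = ["user-db-read"]
  }
}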
» Template Block
Nomad 0.5 introduces a new template block that provides a convenient way to include configuration files that are populated from Consul data, Vault secrets, or just general configurations within a Nomad task.
The template block is powered by the popular Consul Template tool. As such, configurations managed by Nomad can be updated dynamically. To make handling configuration changes as easy as possible, Nomad can signal or restart a task when templates are rewritten.
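As a sketch of that behavior, the template block accepts change_mode and change_signal parameters to control what happens when the rendered output changes; the template contents and destination below are illustrative:

template {
  data = <<END
log_level = {{key "service/geo-api/log-verbosity"}}
END

  destination = "local/app.conf"

  # Re-render on changes and send SIGHUP instead of restarting the task.
  change_mode   = "signal"
  change_signal = "SIGHUP"
}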
If the task requests access to a set of Vault policies, the Vault token created for the task is made seamlessly available to the template block. This makes reading secrets from Vault incredibly simple.
The below example shows a template rendering a configuration that is populated with data from both Consul and Vault. The template could also be stored outside of the job file and downloaded separately.
task "example" {
# ...
template {
data = <<END
{
"log_level": "{{key "service/geo-api/log-verbosity"}}",
"api_key": "{{with secret "secret/geo-api-key"}}{{.Data.key}}{{end}}"
}
END
destination = "local/config.json"
}
# ...
}
This simple block would materialize a configuration similar to the one below and continue watching for changes to the data in both Consul and Vault.
$ cat local/config.json
{
  "log_level": "TRACE",
  "api_key": "2f11c4a6-a15c-11e6-80f5-76304dec7eb7"
}
» Sticky Volumes
Nomad 0.5 introduces a new option for persisting data between versions of a task called sticky volumes.
Nomad provides all tasks with an ephemeral location to write data. Historically, when a replacement task is created because the user submitted an updated job or the node is being drained, the data written by the original task is lost.
With the newly introduced sticky parameter, Nomad will make a best-effort scheduling attempt to place the updated task onto the original machine and re-attach the ephemeral volumes used by the previous task.
If Nomad fails to place the task on the original node, it can optionally do a remote migration of the data. This is controlled with the migrate parameter. Sticky volumes do not provide any guarantee that data will not be lost, as it is not replicated between hosts. They do, however, provide a useful tool for applications that can handle data loss but would prefer not to, such as Cassandra or HDFS.
The example below shows how to enable both sticky volumes and remote migrations:
group "sticky" {
# ...
ephemeral_disk {
# Requst 2 GB of storage
size = 2048
# Enable sticky for the ephemeral volume
sticky = true
# Enable migrations for the case Nomad could not place the updated
# allocation on the same node
migrate = true
}
# ...
}
» Cluster Encryption
Nomad 0.5 brings the ability to encrypt all of its network traffic. There are two separate encryption systems: one for gossip traffic and one for both RPC and HTTP communications.
Gossip traffic is encrypted using symmetric key encryption between Nomad servers and TLS is used to secure all other communication. See the Nomad Agent's Encryption page for more details.
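As a hedged sketch, an agent configuration enabling both systems might look like the following; the gossip key (which can be generated with nomad keygen) and the certificate paths are placeholders:

server {
  enabled = true

  # Symmetric key used to encrypt server gossip (placeholder value).
  encrypt = "cg8StVXbQJ0gPvMd9o7yrg=="
}

tls {
  http = true
  rpc  = true

  # Placeholder paths to the cluster CA and this agent's certificate.
  ca_file   = "/etc/nomad.d/ca.pem"
  cert_file = "/etc/nomad.d/agent.pem"
  key_file  = "/etc/nomad.d/agent-key.pem"

  verify_server_hostname = true
}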
» Roadmap
Nomad 0.5 is a big release that adds lots of new features, improvements, stability, and bug fixes. As a result, we expect there will be some new issues, which will be addressed in the point releases that follow.
Features that are currently planned for the next major release of Nomad are:
- Support for rolling job updates using Consul health checks.
- Support for life-cycle hooks on tasks.
Until then, we hope you enjoy Nomad 0.5 as much as we do! If you experience any issues, please report them on GitHub.