Leveling Up the Terraform Provider for Nomad

At HashiCorp, we’ve always said that our projects should work well separately, but better together. This is certainly true for HashiCorp Terraform and HashiCorp Nomad: Terraform provisions the infrastructure, and Nomad schedules and deploys applications. You can deploy Nomad however you like, and the Nomad team strives to make that easy, but if you opt to use Terraform to manage the servers that Nomad runs on, we believe that should be a first-class experience.

Over the past year, since the Terraform provider for Nomad was initially released, the use of Terraform and Nomad together has been discussed in blog posts like Continuous Deployment with Nomad and Terraform and Auto-bootstrapping a Nomad Cluster. In this post we are excited to explore the possibilities a bit further and embrace some of the Nomad Enterprise features with Terraform.

»Getting Oriented

Before we dive into the features, let’s take this opportunity to talk about what the provider is meant to help with. We like to think of Terraform as something you’ll run no more than every few days, ideally every few weeks. It should manage the slow-moving parts of your infrastructure: the fixtures that stick around for a while. Things that are short-lived or change frequently are probably better handled by a tool like Nomad.

As we look at the new capabilities of the Nomad provider, think of them as geared towards setting up Nomad, or initializing it. The provider is not meant as a general purpose CLI for interacting with Nomad; after all, Nomad already has one of those, and it’s fantastic. Instead, the provider is meant to make sure your cluster is set up just how you want it, so you can spin up copies of your cluster at the drop of a hat.
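
The provider itself only needs to know how to reach your cluster. A minimal sketch of the configuration might look like the following; the address and region here are assumptions about your own environment:

provider "nomad" {
  # assumed address of a reachable Nomad server; adjust for your cluster
  address = "http://nomad.example.com:4646"
  region  = "global"
}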

»Scheduling a Job

The Nomad provider has a nomad_job resource that schedules a job in your cluster. We recommend using it for setting up jobs that are part of the environment or background of your cluster, not for managing application components. Think of it as managing your cluster, not your application.

As always, you can schedule a job by specifying its jobspec:

resource "nomad_job" "app" {
  jobspec = <<EOT
job "foo" {
  datacenters = ["dc1"]
  type = "service"
  group "foo" {
    task "foo" {
      driver = "raw_exec"
      config {
        command = "/bin/sleep"
        args = ["1"]
      }

      resources {
        cpu = 100
        memory = 10
      }

      logs {
        max_files = 3
        max_file_size = 10
      }
    }
  }
}
EOT
}

But there are a few new additions to the nomad_job resource now. You can specify a Vault token in your provider config to be used when submitting jobs that have Vault policies:

provider "nomad" {
  # address, etc. go here
  vault_token = "abc123"
}
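
That token is used when registering jobs whose tasks declare Vault policies. As a rough sketch (the job name, policy name, and task details here are illustrative, not prescriptive), such a job might look like:

resource "nomad_job" "needs_secrets" {
  jobspec = <<EOT
job "needs-secrets" {
  datacenters = ["dc1"]
  type = "service"
  group "app" {
    task "app" {
      driver = "exec"
      config {
        command = "/bin/sleep"
        args = ["300"]
      }

      # the task asks Nomad for a Vault token with this policy;
      # the provider's vault_token is what authorizes that request
      vault {
        policies = ["app-secrets"]
      }

      resources {
        cpu = 100
        memory = 10
      }
    }
  }
}
EOT
}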

For organizations using Sentinel, you can also override soft failures of Sentinel policies:

resource "nomad_job" "app" {
  policy_override = true
  jobspec = <<EOT
# jobspec that Sentinel wouldn’t permit
EOT
}

»Using Regions

If you find yourself wanting to take some action on every region in your Nomad cluster, like scheduling a job or standing up more infrastructure, the Nomad provider’s new nomad_regions data source can be combined with Terraform’s built-in count to help with that:

data "nomad_regions" "list" {}

variable "region_dc_map" {
  default = {
    # with the right naming scheme, this step is unnecessary
    "my-region" = "the-datacenter-I-want-the-job-in”
  }
}

resource "template_file" "regional_jobs" {
  count = "${length(data.nomad_regions.list.regions)}"

  template = <<EOT
job "foo" {
  datacenters = ["$${dc}"]
  type = "service"
  group "foo" {
    task "foo" {
      driver = "exec"
      config {
        command = "/bin/sleep"
        args = ["1"]
      }

      resources {
        cpu = 100
        memory = 10
      }

      logs {
        max_files = 3
        max_file_size = 10
      }
    }
  }
}
EOT

  vars {
    dc = "${var.region_dc_map[data.nomad_regions.list.regions[count.index]]}"
  }
}

resource "nomad_job" "regional" {
  count   = "${length(data.nomad_regions.list.regions)}"
  jobspec = "${template_file.regional_jobs.rendered}"
}

»Creating ACL Policies and Tokens

Terraform can also now manage the ACL policies that were introduced with Nomad 0.7, ensuring that they’re present in your cluster and have not been modified:

resource "nomad_acl_policy" "dev" {
  name        = "dev"
  description = "Grant devs ability to submit and read jobs in the dev environment"

  rules_hcl = <<EOT
namespace "dev" {
  policy = "read"
  capabilities = ["submit-job", "read-job"]
}
EOT
}

If you need to generate ACL tokens for your infrastructure, Terraform can help with that, too:

resource "nomad_acl_token" "ci" {
  name     = "CI access"
  type     = "client"
  policies = ["ci"]
  global   = false
}

# these could also be built into your cloud-init, or used however you like
output "accessor" {
  value = "${nomad_acl_token.ci.accessor_id}"
}

output "token" {
  value = "${nomad_acl_token.ci.secret_id}"
}
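
As the comment above hints, one way to use the token is to feed it into the user data for a client machine. This is just a sketch under assumptions: the cloud-init template path and the nomad_token variable are hypothetical and would need to match your own template:

data "template_file" "client_user_data" {
  # hypothetical cloud-init template that expects a nomad_token variable
  template = "${file("${path.module}/cloud-init.tpl")}"

  vars {
    nomad_token = "${nomad_acl_token.ci.secret_id}"
  }
}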

»Managing Namespaces

We’ve introduced a new nomad_namespace resource that will ensure a namespace is present in your Nomad Enterprise cluster:

resource "nomad_namespace" "foo" {
  name        = "dev"
  description = "Development environment"
  quota       = "dev"
}

»Managing Quota Specifications

For Enterprise customers who want to manage quota limits on their namespaces, the new nomad_quota_specification resource can make sure the quota specification is set correctly, and the nomad_namespace resource can make sure the namespace has the right quota applied:

resource "nomad_quota_specification" "test" {
  name        = "dev"
  description = "Limit resources used for development"

  limits {
    region = "my-nomad-region"

    region_limit {
      cpu = 2500
    }
  }
}
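
To tie the two together, the namespace can reference the quota specification directly. This is just a variation on the namespace example above, and it lets Terraform create the two resources in the right order:

resource "nomad_namespace" "dev" {
  name        = "dev"
  description = "Development environment"

  # referencing the quota specification also creates the dependency for you
  quota = "${nomad_quota_specification.test.name}"
}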

»Managing Sentinel Policies

With Sentinel support baked into Nomad Enterprise, you can enforce policies on your cluster, ensuring that the jobs you run adhere to your organization’s rules. With the Nomad provider, you can take that a step further, keeping your cluster up-to-date with your policies and catching any that have been accidentally changed or left out:

resource "nomad_sentinel_policy" "exec" {
  name        = "exec-only"
  description = "Only allow exec to run jobs"

  policy = <<EOT
main = rule { all_drivers_exec }

# all_drivers_exec checks that all the drivers in use are exec
all_drivers_exec = rule {
    all job.task_groups as tg {
        all tg.tasks as task {
            task.driver is "exec"
        }
    }
}
EOT

  enforcement_level = "soft-mandatory"
  scope             = "submit-job"
}

»More to Come

We’re very excited about this release of the Nomad provider, and we’re even more excited for what’s coming in the pipeline. As straightforward as Nomad is to set up and operate, we want the provider to remove the last few manual steps from the process, and make sure the only commands you need are terraform apply and nomad run. Give it a try, and let us know if you run into any bugs or have ideas for how to improve it!

To learn more about Nomad, visit https://www.hashicorp.com/products/nomad. If you would like to learn more about Terraform, visit https://www.hashicorp.com/products/terraform.

