HashiCorp Consul 0.5
We are proud to release Consul 0.5. Consul is a datacenter runtime that provides many of the capabilities needed to run a modern datacenter, such as service discovery, configuration, and orchestration. It's designed to be distributed and highly available, and has proven to scale to thousands of nodes and services across multiple datacenters.
The last major release of Consul was several months ago, and its incredible stability has allowed us to focus on adding major new features, improving the user experience, and fixing bugs.
Consul 0.5 brings many new features including automated clustering, seamless UI integration via Atlas, enhanced ACLs, simple N+1 deploys, node and service maintenance modes, native HTTP health checks, ephemeral keys, session TTLs, and key rotation among many others.
You can download Consul 0.5 here or view the changelog.
Read on to learn more about the major new features in 0.5.
» Automated Clustering
Bootstrapping a Consul cluster has always required providing a well-known IP or DNS address so that an agent can join an existing cluster. This poses a chicken-and-egg problem for Consul. While this can be solved in a number of ways, Consul 0.5 supports an auto-join mechanism to automatically discover and cluster your Consul agents on any network. This removes the burden of maintaining a well-known address, and simplifies configuration management automation for installing Consul.
This feature is called "auto-join" and is enabled via a free Atlas account along with an API token. The -atlas CLI flag specifies the cluster name, and the -atlas-join flag enables auto-join:
$ export ATLAS_TOKEN=...
$ consul agent -atlas armon/test -atlas-join
==> Consul agent running!
Node name: 'foo'
Datacenter: 'dc1'
Server: false (bootstrap: false)
Client Addr: 127.0.0.2 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 127.0.0.2 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: (Infrastructure: 'armon/test' Join: true)
==> Log data will now stream in as it occurs:
[INFO] serf: EventMemberJoin: foo 127.0.0.2
[INFO] scada-client: connect requested (capability: http)
[INFO] scada-client: auto-joining LAN with [127.0.0.1]
[INFO] agent: (LAN) joining: [127.0.0.1]
[INFO] serf: EventMemberJoin: Armons-MacBook-Air.local 127.0.0.1
The auto-join feature can be used both to join clients together within a datacenter, and to link servers together over the WAN into a global cluster.
» Atlas Integration
Consul 0.5 is the first release that integrates with Atlas, a commercial platform offering by HashiCorp. Atlas provides a single platform to deploy any application on any infrastructure. Integration with Atlas is entirely optional, but it provides new functionality to Consul.
In addition to auto-join being brokered by Atlas, specifying the -atlas flag will automatically enable a beautiful Consul UI for you.
The Consul UI is available for free with an Atlas account today. The number of nodes/services supported by the UI may in the future be limited without a paid subscription, but the open source Consul UI project will always be maintained as a free official alternative, though with a smaller scope of features.
Future versions of the Atlas Consul integration will enable alerting based on health checks so you can notify PagerDuty, email, SMS, etc. out of the box. This is just a single example of what Atlas integration will bring.
More information on enabling Atlas integration can be found in the Consul documentation.
» Enhanced ACLs
Consul 0.4 was the first version to introduce ACL support, enforcing access control to the Key/Value store. With Consul 0.5, we're expanding ACL support to also protect service discovery and registration.
Policy is set per ACL token, using either HCL or JSON. The policy language has now been extended to support the service keyword:
# Default deny all service registrations
service "" {
    policy = "read"
}

# Allow registering the web service
service "web" {
    policy = "write"
}
With Consul 0.5, the policy language supports specifying read-level privileges, but the enforcement is limited to service registration. This means only agents with write-level privileges are allowed to register a service, and any level of privilege can be used for discovery.
Future versions of Consul will extend the enforcement to service discovery as well, allowing fine-grained control of visibility of services.
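As a quick sketch, a token carrying these rules can be created through the ACL HTTP API. The token name here is illustrative, and the request assumes a management token has been exported as MGMT_TOKEN:
$ curl -X PUT localhost:8500/v1/acl/create?token=$MGMT_TOKEN \
    -d '{"Name": "web-deploy", "Type": "client", "Rules": "service \"\" { policy = \"read\" } service \"web\" { policy = \"write\" }"}'
{"ID":"..."}
The returned ID is the new token, which agents can then use when registering the web service.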
» Distributed Locking and N+1 Deploys
One of the benefits of Consul is its support for Sessions and the ability to use them for client-side leader election. However, making applications Consul-aware and implementing leader election correctly can still be difficult and error prone.
To simplify this, Consul 0.5 adds a new lock sub-command, which handles leader election transparently for an application. Suppose we want a highly available service foo, deployed on at least 2 machines while ensuring no more than one instance is running. Using consul lock makes this trivial:
(node1) $ consul lock service/foo/lock ./foo.py
service foo running
(node2) $ consul lock service/foo/lock ./foo.py
We can run this same command on N machines, and Consul will ensure only a single instance is running. If instead we want more than one instance of foo running at a time, we can change the limit of running instances:
(node1) $ consul lock -n=3 service/foo/lock ./foo.py
service foo running
(node2) $ consul lock -n=3 service/foo/lock ./foo.py
service foo running
(node3) $ consul lock -n=3 service/foo/lock ./foo.py
service foo running
(node4) $ consul lock -n=3 service/foo/lock ./foo.py
By providing the -n flag, we switch the behavior from a distributed lock to a distributed semaphore. For any service that requires N instances for availability, consul lock makes it trivial to do an N+1 style deployment, allowing Consul to manage the complexity so that applications can be unaware of it.
More details are available in the lock documentation.
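The lock state itself lives under the Key/Value prefix given to the command, so the current holders can be inspected directly. A sketch (the exact keys under the prefix are an internal detail of the lock command):
$ curl localhost:8500/v1/kv/service/foo/lock?recurse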
» Maintenance Modes
A common request with Consul has been the ability to put a node or service into a maintenance mode. This can be used to decommission a node, or to divert traffic away from a service in preparation for an upgrade.
Consul 0.5 simplifies this by making it a first-class operation. The HTTP API now supports maintenance mode at both the node and service level, but to make it simpler, there is now a maint sub-command.
As an example, to toggle node maintenance mode:
$ consul maint -enable -reason Testing
Node maintenance is now enabled
$ consul maint
Node:
Name: Armons-MacBook-Air.local
Reason: Testing
$ consul maint -disable
Node maintenance is now disabled
This can be embedded as part of an application deploy process, or for infrastructure automation when setting up or tearing down servers.
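For scripted use, the same toggles are exposed through the agent HTTP API. A minimal sketch, assuming a local agent and a registered service with the ID redis:
$ curl -X PUT "localhost:8500/v1/agent/maintenance?enable=true&reason=Testing"
$ curl -X PUT "localhost:8500/v1/agent/service/maintenance/redis?enable=true"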
More details are available in the maint documentation.
» HTTP Health Checks
Consul previously supported script-based and TTL-based health checking mechanisms. The script-based checks are drop-in compatible with Nagios checks, while the TTL checks provide a simple way for Consul-aware applications to integrate.
With Consul 0.5, there are now native HTTP health checks. REST-based microservices are a common pattern, and HTTP health checks can simplify Consul integration for those applications.
Using an HTTP health check is very simple and just requires an http check URL and an interval in the check definition of a service:
{
    "name": "foo",
    "port": 8000,
    "checks": [{
        "http": "http://localhost:8000/health",
        "interval": "10s"
    }]
}
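Once the agent begins executing the check, its status can be read back from the health endpoints. A quick sketch:
$ curl localhost:8500/v1/health/checks/foo
Each entry in the response includes a Status field such as "passing" or "critical", reflecting the most recent HTTP result.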
In addition, Consul 0.5 also allows for multiple checks per service, so that HTTP checks can be added to existing checks.
More details are available in the checks documentation.
» Ephemeral Keys
The initial design of Sessions in Consul targeted the use case of client side locks in the Key/Value store. A session could be associated with various locks in the Key/Value store, and Consul would automatically release those locks when the session was invalidated.
A frequent request was to support ephemeral keys, similar to how ZooKeeper functions. To support this, Sessions in Consul have been extended to support a configurable behavior. The default behavior is "release", which causes any locks held by a session to be released when the session is invalidated; this is the backwards compatible behavior. With Consul 0.5, the behavior can now be set to "delete", causing any held keys to be deleted when the session is invalidated.
Here is an example of the new behavior:
$ curl -X PUT -d '{"behavior":"delete"}' localhost:8500/v1/session/create
{"ID":"acf52e93-6b45-6297-6425-bb97a340b144"}
$ curl -X PUT localhost:8500/v1/kv/test?acquire=acf52e93-6b45-6297-6425-bb97a340b144
true
$ curl -X PUT localhost:8500/v1/session/destroy/acf52e93-6b45-6297-6425-bb97a340b144
true
$ curl localhost:8500/v1/kv/test
The new behavior of Sessions is documented here.
» Session TTLs
One of the key design decisions of Consul was to make use of the gossip protocol as a failure detector for Sessions instead of a heartbeat or TTL based approach. This overcomes some fundamental scalability issues associated with those methods, but increases the complexity of using Consul in cases where the gossip mechanism is not available.
To enable sessions to be used in a broader range of applications, Consul 0.5 adds support for TTLs on Sessions. These are modeled on Google Chubby and can act as an alternative to other health checks. Clients create a session with a TTL and are responsible for renewing it before the TTL expires to keep the session valid.
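For example, a TTL-based session can be created and kept alive through the HTTP API. A sketch, with an illustrative session ID:
$ curl -X PUT -d '{"TTL": "15s"}' localhost:8500/v1/session/create
{"ID":"adf4238a-882b-9ddc-4a9d-5b6758e4159e"}
$ curl -X PUT localhost:8500/v1/session/renew/adf4238a-882b-9ddc-4a9d-5b6758e4159e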
The new behavior of Sessions is documented here.
» Key Rotation
Consul previously supported encryption for the gossip protocol but provided no way to support multiple keys or key rotation. Inspired by the equivalent in Serf, Consul 0.5 adds the keyring sub-command, which can be used to install new keys, view the installed keys, and remove keys from use.
These new features can be used to easily rotate the encryption key:
$ NEW=`consul keygen`
$ consul keyring -install=$NEW
==> Installing new gossip encryption key...
==> Done!
$ consul keyring -use=$NEW
==> Changing primary gossip encryption key...
==> Done!
$ consul keyring -remove=$OLD
==> Removing gossip encryption key...
==> Done!
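At any point, the keys currently installed across the cluster can be listed to verify the rotation took effect:
$ consul keyring -list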
More details are available in the keyring documentation.
» Upgrade Details
Consul 0.5 introduces some new internal commands that are not backwards compatible with previous versions. This means that once a Consul 0.5 server is the cluster leader, servers running older versions cannot be mixed in. Consul 0.5 servers can, however, be run with older clients.
More details are available on the upgrade process here.
Additionally, any users with acl_default_policy set to "deny" must update their policies to handle the service enforcement prior to upgrade. Otherwise, service registration will be denied by ACLs.
We strive to provide the highest level of backwards compatibility possible while adding new features, but regretfully some releases will have more complex upgrade conditions.
» Roadmap
Consul 0.5 is a huge release that adds lots of new features, improvements, stability enhancements, and bug fixes. As a result, we expect that there will be some new issues, which will be addressed in point releases to follow.
Following those, Consul 0.6 will focus on improving performance, improving the fidelity of blocking queries, and moving to pure Go. Until then, we hope you enjoy Consul 0.5 as much as we do!
If you experience any issues, please report them on GitHub.