HashiCraft Holiday Hackathon Wrap Up
See the results of the HashiCraft Holiday Hackathon.
Announcing the HashiCraft Holidays Hackstravaganza
We will be organizing a HashiCraft Holidays Hackstravaganza where you and your fellow tinkerers can showcase one or more of our products in creative and unexpected ways.
Encrypting Data while Preserving Formatting with the Vault Enterprise Transform Secrets Engine
Vault 1.4 Enterprise introduced a new secrets engine called Transform. This post shows you how to integrate the Transform secrets engine into a simple API; source code is provided for both the Java and Go programming languages.
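For a taste of the Go side, here is a minimal sketch (not the post's full example) that encodes a value through Transform, assuming a role named `payments` with a format-preserving transformation has already been configured in Vault:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR; the client picks up VAULT_TOKEN
	// from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Encode a card number via the illustrative "payments" role; the
	// response preserves the original value's format.
	secret, err := client.Logical().Write("transform/encode/payments",
		map[string]interface{}{"value": "1111-2222-3333-4444"})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(secret.Data["encoded_value"])
}
```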
Dynamic Database Credentials with Vault and Kubernetes
In this blog post, we will look at how the Vault integration for Kubernetes allows an operator or developer to use metadata annotations to inject dynamically generated database secrets into a Kubernetes pod. The integration automatically handles authentication with Vault and the management of the secrets; the application simply reads them from the filesystem.
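As a preview of the annotation-driven workflow, here is a minimal sketch of a pod template; the role name and database path are illustrative:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Ask the injector to add a Vault agent sidecar to this pod.
        vault.hashicorp.com/agent-inject: "true"
        # The Vault role the pod authenticates as via the Kubernetes auth method.
        vault.hashicorp.com/role: "web"
        # Render dynamic credentials from this path to /vault/secrets/db-creds.
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
```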
HashiCorp Consul supports Microsoft’s new Service Mesh Interface
Today at KubeCon EU in Barcelona, Microsoft introduced a new specification, the Service Mesh Interface (SMI), for integrating service mesh providers with Kubernetes. This post explains how Consul fits into this new specification and how it can be used in Kubernetes environments.
Network segmentation in modern environments
Network segmentation is a highly effective strategy for limiting the impact of a network intrusion. However, in modern environments, such as those managed by a cluster scheduler, applications are started and restarted frequently without operator intervention. This dynamic provisioning results in constantly changing IP addresses and application ingress ports. Segmenting these dynamic environments with traditional firewalls and routing can be technically challenging. In this post, we look at this complexity and at how a service mesh offers a potential solution for securing network traffic in modern dynamic environments.
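To make the contrast concrete: a service mesh can express segmentation as rules between logical services rather than firewall rules over ever-changing IPs. For illustration only, a sketch of a Consul service-intentions entry (service names are illustrative):

```hcl
# Only the "web" service may call "payments"; everything else is denied.
Kind = "service-intentions"
Name = "payments"
Sources = [
  {
    Name   = "web"
    Action = "allow"
  },
  {
    Name   = "*"
    Action = "deny"
  },
]
```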
Resource Targeting in Terraform
In this post, we will discuss another Terraform feature that can help you make fine-grained changes, either to avoid downtime or to recover from mistakes in your configuration: resource targeting.
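For example, the -target flag restricts a plan or apply to a single resource or module (the addresses below are illustrative):

```shell
# Plan and apply changes for one resource only...
terraform plan -target=aws_instance.web
terraform apply -target=aws_instance.web

# ...or for everything inside a single module.
terraform apply -target=module.app
```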
Creating a Kubernetes Cluster with AKS and Terraform
We are going to take a look at how we can create a Kubernetes cluster in Azure using the azurerm_kubernetes_cluster resource.
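As a preview, a minimal sketch of the resource against a recent azurerm provider; it assumes an `azurerm_resource_group.main` defined elsewhere, and the name, node count, and VM size are illustrative:

```hcl
resource "azurerm_kubernetes_cluster" "main" {
  name                = "my-aks-cluster"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "myaks"

  # A single default node pool is the minimum a cluster needs.
  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```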
Zero Downtime Updates with HashiCorp Terraform
In this post, we are going to look at two simple Terraform features that allow us to avoid downtime caused by updates and to replace resources without interruption. The examples in this post use the DigitalOcean provider; however, the techniques explained are not specific to any particular provider, as they are features built into the Terraform core.
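One of those features is the create_before_destroy lifecycle setting, sketched here on an illustrative DigitalOcean droplet:

```hcl
resource "digitalocean_droplet" "web" {
  image  = "ubuntu-20-04-x64"
  name   = "web"
  region = "lon1"
  size   = "s-1vcpu-1gb"

  lifecycle {
    # Provision the replacement droplet before destroying the old one,
    # so traffic can be cut over without a gap.
    create_before_destroy = true
  }
}
```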
HashiCorp Terraform: Modules as Building Blocks for Infrastructure
Operators adopt tools like HashiCorp Terraform to provide a simple workflow for managing infrastructure. Users write configurations and run a few commands to test and apply changes. However, infrastructure management often extends beyond simple configuration, and we need a workflow to build, publish, and share customized, validated, and versioned configurations. Successful implementation of this workflow starts with reusable configuration. In this post, we look at modules, the problems they solve, and how you can leverage them as the building blocks for your infrastructure.
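To preview the idea: consuming a published module collapses many resource blocks into one call. The registry source, version, and inputs below are illustrative:

```hcl
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # Module inputs replace copy-pasted resource configuration.
  name = "app-network"
  cidr = "10.0.0.0/16"
}
```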
Using Sentinel Policy to enforce continuous deployment windows
In the same way that we can embed Sentinel into a pipeline to enforce policy for Terraform plans or Vault secrets, we can also enforce policy in a continuous delivery pipeline. In this post, we are going to examine how Sentinel policy and the Sentinel Simulator can be used to ensure your CD system only deploys your application within a specified time window.
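A minimal sketch of such a policy, assuming Sentinel's standard time import; the window itself is illustrative:

```sentinel
import "time"

# Permit deployments only on weekdays between 09:00 and 16:00.
in_window = rule {
    time.now.weekday_name not in ["Saturday", "Sunday"] and
    time.now.hour >= 9 and
    time.now.hour < 16
}

main = rule { in_window }
```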
Functions as a Service with Nomad and OpenFaaS
OpenFaaS (or Functions as a Service) is a framework for building serverless functions on top of containers. With OpenFaaS you can package any process or container as a serverless function for either Linux or Windows; just bring your Nomad cluster. The project focuses on ease of use through its UI and CLI, which can be used to build, test, and monitor functions, while integration with Prometheus enables auto-scaling.
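The CLI workflow looks roughly like this (the function name and language template are illustrative):

```shell
# Scaffold, build, and deploy a function, then invoke it.
faas-cli new echo --lang go
faas-cli build -f echo.yml
faas-cli deploy -f echo.yml
echo "hi" | faas-cli invoke echo
```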
Continuous Deployment with Nomad and Terraform
This post explores how to use the Nomad Terraform provider to control the lifecycle of a Nomad service. Both HashiCorp Nomad and Terraform allow you to declaratively define infrastructure as code, but they serve different functions in the organization. Nomad schedules and monitors applications, making sure they stay running and automatically reconciling any failures. Nomad supports rolling deploys for safer convergence, and it integrates with Consul for service discovery and Vault for secrets management. Terraform, on the other hand, is a lifecycle management and provisioning tool. It creates, updates, and destroys the underlying infrastructure that Nomad uses to run applications. But Terraform is much more than an infrastructure tool: it can also manage the process of submitting, updating, and deleting Nomad jobs, which allows you to model your entire infrastructure as code. This makes the Nomad provider a natural fit for continuous delivery, and in this post we will look at how these tools work seamlessly together to enable that workflow.
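In practice the provider boils down to a single resource; the Nomad address and job file path here are illustrative:

```hcl
provider "nomad" {
  address = "http://nomad.example.com:4646"
}

# Terraform submits the job and manages its lifecycle from here on:
# changing the jobspec updates it, removing the resource deregisters it.
resource "nomad_job" "app" {
  jobspec = file("${path.module}/app.nomad")
}
```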
Auto-bootstrapping a Nomad Cluster
In a previous post, we explored how HashiCorp Consul discovers other agents using cloud metadata to bootstrap a cluster. This post looks at HashiCorp Nomad's auto-joining functionality and how we can use Terraform to create an autoscaled cluster.
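A sketch of the relevant server stanza, using cloud auto-join via AWS instance tags (the tag key and value are illustrative):

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  server_join {
    # Discover peers through EC2 instance tags instead of static IPs.
    retry_join = ["provider=aws tag_key=nomad-server tag_value=true"]
  }
}
```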
Consul Auto-Join with Cloud Metadata
We work in a world of distributed systems that operate in rapidly changing environments. Servers come and go, they move across regions and distribution groups, and somehow they need to communicate and connect to one another. To solve this problem, HashiCorp created Consul, which, among many other things, enables service registry and service discovery. Application instances register themselves with Consul, and dependent instances query Consul to discover each other. Since Consul itself is a distributed system, this creates a chicken-and-egg problem: how do you bootstrap your service discovery?
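Cloud auto-join answers that question by deriving the join addresses from cloud provider metadata; for example (the tag key and value are illustrative):

```shell
# Each server discovers its peers from EC2 instance tags at start-up.
consul agent -server -bootstrap-expect=3 \
  -retry-join "provider=aws tag_key=consul-server tag_value=true"
```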