Announcing Terraform AWS Cloud Control Provider Tech Preview
This new provider for HashiCorp Terraform — built around the AWS Cloud Control API — is designed to bring new services to Terraform faster.
The HashiCorp Terraform AWS Cloud Control Provider, currently in tech preview, aims to bring Amazon Web Services (AWS) resources to Terraform users faster. The new provider is automatically generated, which means new features and services on AWS can be supported right away. The AWS Cloud Control provider supports hundreds of AWS resources, with more support being added as AWS service teams adopt the Cloud Control API standard.
For Terraform users managing infrastructure on AWS, we expect this new provider to be used alongside the existing AWS provider, which will continue to be maintained. Because it can support new features and services automatically, the new provider will increase resource coverage and significantly reduce the time it takes to support new capabilities. We are excited for this to improve the user experience and reduce the frustration caused by coverage gaps.
» AWS Cloud Control API
AWS Cloud Control API provides a unified set of API actions, along with common input parameters and error types, across AWS services, making it easier for developers to manage their cloud infrastructure consistently and to leverage the latest AWS capabilities sooner. As a result, any resource type published to the CloudFormation Public Registry exposes a standard JSON schema and can be acted upon through this interface. AWS Cloud Control API is available in all commercial AWS regions except China.
For more information about AWS Cloud Control API, visit the user guide and documentation.
» How the AWS Cloud Control Terraform Provider Works
Because AWS Cloud Control API provides a consistent abstraction layer over the underlying AWS service APIs, we are able to automatically generate the codebase for the AWS Cloud Control Terraform provider. Generating the provider allows us to deliver new resources faster because we don’t have to write boilerplate and standard resource implementations for each new service. The maintainers of the Terraform AWS Cloud Control provider can instead focus on user experience upgrades and performance improvements.
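For example, a generated resource’s type and attribute names are derived from the snake_cased CloudFormation schema for that resource type. The sketch below assumes an awscc_logs_log_group resource generated from the AWS::Logs::LogGroup schema; consult the provider documentation on the Terraform Registry for the exact names:
# A minimal sketch of a generated resource. The type and attribute
# names here are derived from the AWS::Logs::LogGroup CloudFormation
# schema and may evolve during the tech preview.
resource "awscc_logs_log_group" "example" {
  log_group_name    = "awscc-tech-preview-example"
  retention_in_days = 7
}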
» Use Cases
While the Terraform AWS Cloud Control Provider is still in tech preview, we suggest practitioners use this provider to:
- Experiment with new services before they are added to the Terraform AWS provider
- Test configurations in development or staging environments
- Build out proof-of-concept deployments in conjunction with the Terraform AWS provider, such as using Amazon AppFlow with Amazon S3 as illustrated in the example later in this blog post
Until the tech preview concludes, we suggest using the Terraform AWS provider for production workloads and critical services. We will evaluate the tech preview and rely on community feedback to inform our decisions regarding general availability.
» Requirements
To use the new Terraform AWS Cloud Control provider, you will need:
- Terraform 1.0 or later
- An active AWS account in any commercial region, excluding China
» Configuring the Provider
To configure the provider, add the configuration blocks shown here, specifying your preferred region:
terraform {
  required_providers {
    awscc = {
      source  = "hashicorp/awscc"
      version = "~> 0.1"
    }
  }
}

# Configure the AWS Cloud Control Provider
provider "awscc" {
  region = "us-east-1"
}
» Authentication
To use the AWS Cloud Control provider, you will need to authenticate with your AWS account. You can use any authentication method available in the AWS SDK, including:
- Environment variables
- Shared credentials file
- AWS CodeBuild, Amazon ECS, and Amazon EKS roles
- Custom User-Agents
- EC2 instance metadata service
- Assumed roles via AWS STS AssumeRole
For more information and examples, please refer to the provider documentation on the Terraform Registry.
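For example, the sketch below configures the provider with a named profile from your shared credentials file. The profile name is a placeholder; alternatively, export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables and omit the profile argument:
provider "awscc" {
  region = "us-east-1"

  # "my-terraform-profile" is a placeholder and must match a profile
  # in your shared AWS credentials file (e.g. ~/.aws/credentials).
  profile = "my-terraform-profile"
}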
» Example Usage
To see how it all fits together, check out this example configuration using Amazon AppFlow. First, set up an AppFlow flow using the Terraform AWS Cloud Control provider (awscc). Then set up an Amazon S3 bucket to store the flow data using the Terraform AWS provider (aws).
This example demonstrates how you can use the core resources in the aws provider to supplement the new services in the awscc provider.
» Using Two Providers
You’ll need to configure both providers in the same configuration file. Both the awscc and aws providers must be initialized for this example to work.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    awscc = {
      source = "hashicorp/awscc"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

provider "awscc" {
  region = "us-west-2"
}
» Setting Up S3 Buckets
Set up your S3 buckets using the aws provider. Designate a bucket for both your source and destination.
resource "aws_s3_bucket" "source" {
bucket = "awscc-appflow-demo-source"
acl = "private"
}
resource "aws_s3_bucket" "destination" {
bucket = "awscc-appflow-demo-destination"
acl = "private"
}
» Creating an AppFlow Flow
When creating a flow, you will need to provide the flow_name, connector_type, tasks, and trigger_config. Other optional attributes, such as tags, can also be set on the resource.
To store flow data in S3, you must provide the bucket_name within the destination_connector_properties. You can also optionally provide the bucket_prefix and the s3_output_config.
resource "awscc_appflow_flow" "flow" {
flow_name = "s3-to-s3-flow"
source_flow_config = {
connector_type = "S3"
source_connector_properties = {
s3 = {
bucket_name = aws_s3_bucket.source.bucket
bucket_prefix = "af"
}
}
}
destination_flow_config_list = [
{
connector_type = "S3"
destination_connector_properties = {
s3 = {
bucket_name = aws_s3_bucket.destination.bucket
}
}
}
]
tasks = [
{
source_fields = [
"column_one",
"column_two"
]
connector_operator = {
s3 = "PROJECTION"
}
task_type = "Filter"
task_properties = []
},
{
source_fields = [
"column_one,column_two"
]
connector_operator = {
s3 = "NO_OP"
}
destination_field = "column_cat"
task_type = "Map",
task_properties = [{
key = "DESTINATION_DATA_TYPE"
value = "string"
}]
},
{
source_fields = [
"column_one",
"column_two"
]
connector_operator = {
s3 = "NO_OP"
}
destination_field = "column_one,column_two"
task_type = "Merge"
task_properties = [{
key = "CONCAT_FORMAT"
value = "$${column_one} $${column_two}"
}]
}
]
trigger_config = {
trigger_type = "Scheduled"
trigger_properties = {
schedule_expression = "rate(1minutes)"
}
}
}
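After running terraform apply, you can reference the flow’s attributes elsewhere in your configuration. Here is a minimal sketch, assuming the generated resource exports a flow_arn attribute mirroring the FlowArn return value of AWS::AppFlow::Flow; check the provider documentation on the Terraform Registry for the exact attribute name:
output "appflow_flow_arn" {
  # flow_arn is assumed here from the CloudFormation FlowArn return
  # value; verify the generated attribute name in the awscc docs.
  value = awscc_appflow_flow.flow.flow_arn
}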
Note: At this time, the AWS Cloud Control API does not offer the ability to schedule or start flows. To schedule your configured flow, you will need to use the Amazon AppFlow console. For more information about how to use Amazon AppFlow and the various connection and destination types, visit the Amazon AppFlow documentation.
For additional examples, visit the HashiCorp Learn guide.
» Tell Us What You Think
We would love to hear your feedback on this project. You can report bugs and request features or enhancements for the AWS Cloud Control provider by opening an issue on our GitHub repository.
For AWS service coverage requests, please create an issue on the CloudFormation Public Coverage Roadmap.
For documentation and examples, please visit the Terraform Registry and the HashiCorp Learn platform.