Seven years ago, managing security and compliance was fairly straightforward at Duke Energy. Most of the company’s applications ran on-premises. Security teams had complete control over almost everything.
But when the 100-year-old, regulated utility moved to the cloud, all of that changed. Enforcing security without hampering delivery speed became much more complicated. In our conversation with Duke Energy, they told us how they maintain delivery speed while operating under regulation.
In this post, you’ll see how Duke Energy used infrastructure management standardization, policy as code, and centralized secrets management to scale cloud security without slowing developers down. We’ll wrap up with six lessons learned from their story.
»From datacenters to the cloud
Duke Energy began its cloud journey much like many other organizations — with the realization that aging datacenters limited agility, and the cloud had so much more to offer.
The energy sector was changing, and if the company was going to “build a smarter, cleaner energy future”, it needed new digital services and a more modern computing platform.
Moving to the cloud, however, meant inheriting new security responsibilities that could only be solved through automation, governance, and a standardized developer experience.
“As we started moving to the cloud, security was one of the biggest concerns for us. We really had to be careful about how applications were deployed, where pipelines got secrets from, and how they were stored.”
— Travis Rutledge, Senior Cloud Engineer, Duke Energy
»Standardizing on Terraform Enterprise and Sentinel
Early on, Duke Energy adopted the community version of Terraform to bring consistency to cloud provisioning. But the platform team manually reviewed all of the pull requests made by product teams as a security checkpoint. This caused two problems, and both increased risk:
- As teams multiplied, the process became unscalable
- Humans often made mistakes
“We had very preliminary service control policies, but nothing that stopped actions at the infrastructure layer.”
— Travis Rutledge, Senior Cloud Engineer, Duke Energy
To address this issue, the company introduced an open source security policy tool. But they soon outgrew its capabilities: the tool had no way of evaluating input or local variables in a Terraform resource, which meant someone could unknowingly introduce an attack vector that the policy wouldn’t catch.
That’s when they implemented Terraform Enterprise and Sentinel. Sentinel evaluates Terraform plans before infrastructure is applied, acting as a security gate that prevents unsafe resources — such as S3 buckets accidentally left open to the public — from ever being deployed.
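A Sentinel policy of this kind might look like the following sketch, using the `tfplan/v2` import to inspect planned resources. The policy and attribute checks are illustrative, not Duke Energy’s actual rule set:

```sentinel
# Illustrative policy: block S3 buckets created with public ACLs.
import "tfplan/v2" as tfplan

# All S3 buckets being created in this plan
s3_buckets = filter tfplan.resource_changes as _, rc {
	rc.type is "aws_s3_bucket" and
	rc.change.actions contains "create"
}

# Deny any bucket whose ACL grants public access
deny_public_acl = rule {
	all s3_buckets as _, bucket {
		bucket.change.after.acl not in ["public-read", "public-read-write"]
	}
}

main = rule {
	deny_public_acl
}
```

Because Sentinel runs against the Terraform plan, a violation stops the run before any resource is created, rather than flagging it after the fact.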
»From secrets everywhere to managed secrets
One major challenge in Duke Energy’s early cloud adoption was secret sprawl: secrets were siloed and decentralized across multiple systems, depending on the team or the use case, with no single source of truth — far from best practice.
They implemented HashiCorp Vault to bring more structure to secrets management, which alleviated most of the sprawl, though some AWS applications still need their secrets delivered to AWS Secrets Manager as well.
“As we grew in the cloud, we really didn’t have a central secret store, even though we were using Vault. We needed to define exactly what Vault is the standard for and still allow secrets to get where they need to go for things like Secrets Manager, Azure Key Vault, you name it.”
— Travis Rutledge, Senior Cloud Engineer, Duke Energy
Vault also has the capability to sync secrets to a variety of cloud-native secret stores, so secrets can still be centrally managed in one cloud-agnostic repository while being consumed wherever applications need them.
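As a sketch of how that sync capability can be wired up (the destination and secret names here are hypothetical, and the feature requires a recent Vault Enterprise release):

```shell
# 1. Register AWS Secrets Manager as a sync destination
vault write sys/sync/destinations/aws-sm/my-awssm-dest \
    access_key_id="$AWS_ACCESS_KEY_ID" \
    secret_access_key="$AWS_SECRET_ACCESS_KEY" \
    region="us-east-1"

# 2. Associate a KV v2 secret with that destination; Vault keeps the
#    copy in Secrets Manager in sync with the source of truth in Vault
vault write sys/sync/destinations/aws-sm/my-awssm-dest/associations/set \
    mount="secret" \
    secret_name="my-app/db-creds"
```

Vault remains the system of record; the synced copy in Secrets Manager is updated whenever the source secret changes.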
In addition to working on the centralization problem, the company is systematically using Vault to eliminate long-lived IAM user credentials and replace them with short-lived, just-in-time credentials. Even if one of these short-lived credentials (also called a “dynamic secret”) falls into the wrong hands, the attacker won’t have much time to cause harm.
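With Vault’s AWS secrets engine, that pattern looks roughly like the following sketch. The role name, policy, and lease times are hypothetical:

```shell
# Keep generated credentials short-lived by default
vault write aws/config/lease lease=15m lease_max=1h

# Operator: define a role that maps to a scoped IAM policy
vault write aws/roles/app-readonly \
    credential_type=iam_user \
    policy_arns="arn:aws:iam::aws:policy/ReadOnlyAccess"

# Pipeline or app: request credentials on demand; Vault creates them
# just-in-time and revokes them automatically when the lease expires
vault read aws/creds/app-readonly
```

Nothing long-lived is ever stored in a pipeline or config file — the credential exists only for the duration of its lease.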
»The rise of no-code infrastructure
As cloud usage grew, the company hit a scaling wall: developers were spending more time debugging Sentinel policies and writing Terraform code than building applications.
To reduce cognitive load while preserving security, the company began using Terraform no-code modules — pre-built, hardened infrastructure templates exposed through a simplified interface. This allowed developers to stand up services like API gateways or S3 buckets without touching Terraform code at all.
“We’re implementing this [no-code modules] now. Our goal is to make things easy. For example, if developers want an S3 bucket, they just click a few buttons, fill out a couple of parameters, and they’re off to the races.”
— Travis Rutledge, Senior Cloud Engineer, Duke Energy
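Behind the simplified interface sits an ordinary, hardened Terraform module. A sketch of what could back a no-code S3 offering — the names and defaults here are hypothetical, not Duke Energy’s actual module — might look like:

```hcl
variable "bucket_name" {
  type        = string
  description = "The only input a developer must supply"
}

variable "environment" {
  type    = string
  default = "nonprod"
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
  tags   = { environment = var.environment }
}

# Security settings are baked in rather than exposed as options
resource "aws_s3_bucket_public_access_block" "this" {
  bucket                  = aws_s3_bucket.this.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```

Developers only ever see the two variables; the public-access block and encryption settings can’t be turned off from the no-code interface.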
»Building a full lifecycle platform: From Day 0 to Day 2+
Duke Energy’s cloud journey started with teams reinventing the wheel across the organization: teams might deploy the same thing, but build it 15 different bespoke ways across the org.
That’s more work for those teams, and it’s more work for the platform team to support and do maintenance across those different processes. What the platform team wanted was a more structured model to govern cloud infrastructure.
By standardizing multiple processes into one, they save time for developers and operators across the org, while also making everything more maintainable for the central platform team. This is the end-to-end, standardized infrastructure and security automation workflow that they’ve built so far:
»Day 0 — Getting started fast
Developers receive:
- Pre-seeded GitHub repositories with Terraform authentication
- Backend configuration and patterns already embedded
This allows them to focus on development, not platform plumbing. They’re also able to deploy quickly to non-production environments for fast prototyping.
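The embedded boilerplate in a pre-seeded repository might amount to something like this sketch — the hostname, organization, and workspace names are hypothetical:

```hcl
terraform {
  cloud {
    hostname     = "tfe.example.com" # self-managed Terraform Enterprise
    organization = "example-org"

    workspaces {
      name = "my-app-nonprod"
    }
  }
}
```

Because this block ships with the repo, a developer’s first `terraform plan` already runs against the governed platform rather than a laptop-local state file.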
The company wants the next evolution to be a marketplace-style “vending machine” using no-code modules and something like HCP Waypoint so developers can pick the resources they need and deploy securely, without having to worry about Terraform configurations for common use cases.
»Day 1 — Deploying with guardrails
When teams are ready to deploy to production, they move on to Day 1. Everything is built with standardized Terraform modules and checked with Sentinel. Manual ClickOps deployments are limited to isolated sandbox accounts only. No circumvention of governance controls, no break-glass configuration drift.
»Day 2+ — Operating securely at scale
This is where most enterprises struggle, and Duke Energy is no exception. Patching databases, rotating secrets, upgrading runtimes, and remediating vulnerabilities are time-consuming and inconsistent across teams.
To address this, the company is exploring ways to automate Day 2 operations by driving Vault adoption and using things like no-code modules, GitHub Actions, and Waypoint Actions so that users don’t need to “get into the box” to manage those things. The goal is to make environments more consistent, secure, and supportable.
»A better developer experience, without sacrificing security
Duke Energy’s story shows that happy developers and strong security can coexist. A central theme in the company’s journey is that security and developer productivity can reinforce each other when automation is applied thoughtfully.
By shifting governance into policy as code and abstracting infrastructure behind no-code modules, Duke Energy reduced developer friction while strengthening security.
»How do Duke Energy’s leaders know if developer experience is getting better?
They ask. Leaders monitor sentiment across teams, run periodic surveys, and listen carefully to the “temperature” of the developer community.
When things are going well, Rutledge says, “There’s less noise, smoother deployments, and fewer urgent requests.”
To learn more about how you can improve developer experience while ensuring security and regulatory compliance, read our cheatsheet: 13 proven best practices to build your DevEx program.
»Duke Energy’s next steps
Looking forward, Duke Energy aims to industrialize the entire infrastructure lifecycle:
- A full marketplace of no-code modules for common patterns
- Migrating from Terraform Enterprise (self-managed) to HCP Terraform (SaaS)
- Automated Day 2 remediation and operations
- Secure dynamic credentials through workload identity
- Multi-cloud governance with cloud-agnostic policies in Sentinel
- Faster onboarding of emerging services like AWS Bedrock
“We have a bold vision. We want to be the Amazon of infrastructure. If you need a thing, you can have it the same day.”
— Travis Rutledge, Senior Cloud Engineer, Duke Energy
»Lessons learned
Duke Energy’s story offers several clear lessons for organizations scaling cloud securely:
»1. Standardize early and ruthlessly
Every shortcut you tolerate early becomes a tax on future scalability.
»2. Automate governance, not just provisioning
Policy as code prevents configuration drift and human error at scale.
»3. Reduce cognitive load for developers
No-code modules and abstractions allow teams to deploy safely without mastering every AWS, Azure, GCP, or on-prem nuance.
»4. Centralize secrets and eliminate static credentials
Dynamic workload identity is becoming essential for both security and auditability.
»5. Invest deeply in Day 2+ automation
Most breaches, outages, and inefficiencies happen after deployment, not during it.
»6. Build for long-term maintainability
“If we had focused on maintainability from day one, we would have been farther in our journey today,” says Rutledge.
»Take the next step
Duke Energy’s cloud security journey demonstrates what is possible when a large, regulated enterprise rethinks how infrastructure should be built and operated. With a focus on four key areas:
- Standardization
- Policy as code
- Secret lifecycle management
- No-code abstractions
…they’ve built a platform that scales securely without slowing innovation.
You can watch our full conversation with Duke Energy here: Cloud in action: How Duke Energy scales cloud security.