
Terraform and its Extensible Provider Architecture

HashiCorp Terraform's not-so-secret sauce is its provider API, which allows Terraform to support every popular cloud platform and to be extended to new ones.

Terraform providers are behind the Terraform adage, "provision any infrastructure." Terraform has an expansive ecosystem of providers that provision resources for clouds, services, platforms, and more. Basically, anything with an API can have a provider built for it.

In this talk, Clint Shryock will discuss the internals of Terraform—graph, state, plugins—and the anatomy of a Provider. He'll describe the process for building a provider, and then we'll have a demo exploring the possibilities for managing your infrastructure and applications.

Transcript

Today I'm talking about Terraform's extensible provider architecture. The parts here: Terraform, the provider architecture, version 0.10 (which gets its own special little note because it has some implications), and then demos, which is gonna be a great time.

Parts one, two, and three will probably go kind of quick; I'll try to talk slow. The demo ... we're going to roll with it. It will probably work, and it'll probably demonstrate what I want it to. Just keep an open mind when I talk about Terraform and extensibility; it's kind of a loaded term.

Part 1: Terraform

Looking at HashiCorp's mission: provision, secure, connect and run any infrastructure for any application. Terraform's goal is to be the provisioning part of that mission.

Terraform is a tool for infrastructure as code. Our goal is to let you write, plan, and create infrastructure as code. I don't have notes; I wanted notes. Okay, we don't have notes; we're going to figure it out then. We want to create infrastructure as code, and we have a lot of providers. These are just some of them, a very, very small number of them: Amazon, Azure, GitHub. We cover all sorts of things, like infrastructure as a service, platform as a service, software as a service, bare metal, and probably some other things too.

I think I've heard before that people think Terraform is a tool for working with Amazon, and it's much, much more than that. Infrastructure as code. With Terraform, you get a unified view of all of your infrastructure, not just your Amazon resources; we're talking DNS records, your Heroku applications. It makes it really easy to compose these into a unified view of everything.

With interpolation, we can define relationships between a DNS entry and an AWS instance, and we allow operators to safely iterate on infrastructure over time. You can start very small and begin to add things, and with Terraform's workflow, you are able to see changes incrementally, make them safely, and verify that that's actually what you want to do.
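
To make that concrete, here's a minimal sketch of what that kind of relationship looks like in 0.10-era configuration syntax; the AMI ID, domain, and resource names are placeholders, not from the talk:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-abc123" # placeholder AMI ID
  instance_type = "t2.micro"
}

# The interpolated value below ties the record to the instance, so Terraform
# knows the instance has to exist before the record can be created.
resource "dnsimple_record" "www" {
  domain = "example.com"
  name   = "www"
  type   = "A"
  value  = "${aws_instance.web.public_ip}"
}
```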

With Terraform, you get one workflow, many clouds. We want to abstract working with cloud providers into a unified way of doing it, so you don't have to be intimately aware of each platform's web console; you just have a single interface, which is Terraform.

Key features

Terraform is open source. Who here has used Terraform? Most of you have probably used Terraform.

Has anyone contributed code to Terraform? Thank you. That's awesome. And even if it's not code, if it's a bug report or anything, thank you very much; that's awesome. Terraform has a huge community, and we are so grateful for that. Open-source Terraform uses HCL, which stands for HashiCorp Configuration Language. It's meant to be a human-readable but machine-editable configuration language, so that it's easy for operators to use.

Terraform uses a dependency graph. What that means is, it reads your configuration and constructs a structure, understanding which parts of your configuration need to come first, which parts can come later, and which parts are unrelated, so that when you're creating your infrastructure, it can create the unrelated things in parallel, which will speed up the process. Instead of going sequentially top-down, it understands that it can fan out and create the unrelated things at the same time.

We have a two-phase way of doing things: terraform plan and terraform apply. You can skip straight to apply if you're feeling lucky, but terraform plan helps you iterate slowly and safely. You can add a change to your Terraform configuration and run terraform plan, and it'll tell you exactly the changes it's going to make. That gives operators an opportunity to go slowly, but also to catch what we call drift.

You can actually just run terraform plan on your configuration, not even adding anything, just to make sure everything is still the way you left it, and you can find those little security group rules that so-and-so added from the console when they went around you and didn't use Terraform. It allows you to detect changes in your infrastructure and go back ... I saw you shaking your head; no one does that, right? It allows you to find those things and correct them if need be.

In Terraform Enterprise you get collaboration, history, and an audit trail. Terraform does have built-in support for collaboration in the form of remote state. Instead of having the state on your machine, you can actually share it with your colleagues. Over time, as your team grows, you need better collaboration features, which is where Enterprise comes in. All right.
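
For reference, configuring remote state in this era of Terraform looks roughly like this; the S3 backend and the bucket and key names are just one hypothetical choice:

```hcl
terraform {
  backend "s3" {
    bucket = "our-team-state"        # hypothetical bucket shared by the team
    key    = "app/terraform.tfstate" # where this project's state lives
    region = "us-east-1"
  }
}
```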

Terraform over the years

0.4 is right about when I started there. This is a graph of resources and providers. You see providers have been slowly on the uptick; at 0.9, we had 65 providers, I think. Resources, on the other hand, are ballooning out of control, in a good way of course. At 0.9, I wanna say we had over 600 supported resources, and I don't think that includes data sources at all.

This speaks really well to Terraform's extensibility, in that it's very easy to add these things. By having Terraform core focus on what it needs to, and by giving Terraform providers a flexible, well-defined architecture, it's really easy to add more resources and more providers.

Terraform itself, if you're not familiar, is a single binary written in Go. We chose Go because we really like the language, and it lends itself very well to a tool like this. You can compile Go for several different platforms: Unix platforms like Linux and the BSDs, and also Windows. You can compile from a single source tree, and it'll run on all of those.

We use a provider provisioner structure, or architecture ... okay, sorry, I read that wrong. It's a plug-in architecture: Terraform core is a binary, and it communicates with providers and provisioners, the plug-ins, over RPC. It launches them as sub-processes and communicates with them. The important part there is that it's easy to add your own plug-ins.

We are now split into Terraform core and Terraform providers; they are physically separate. I'll cover that under 0.10. Terraform, as I mentioned earlier, has a graph. Specifically, it's a directed acyclic graph, which means you can't have loops; that would be bad, because then Terraform doesn't know where to start. Terraform will detect it if you accidentally add a loop in there.

Big picture here: you can think of Terraform as this kind of structure. We've got core, which is in charge of configuration, and state, and the graph, and it's talking to providers. Providers are conveniently on the outside of that, but to a user it's all kind of one thing. Did you want a picture of that? Okay, all right. The slides will be posted later.

Core responsibilities:

Reading your configuration, managing your state, handling the interpolation (I'll show you what interpolation means if you're not familiar), figuring out the dependencies by understanding that this resource has to come before that resource, and constructing the graph. Core also discovers and communicates with plug-ins, and it offers the plan and apply operations. Kind of another view here: Terraform core talks to its plug-ins, in this case the providers, and the providers talk to the upstream services.

Terraform core itself is concerned with graph-type ideas: what's the difference between the desired state and the current state, applying those changes, and also refreshing our state. Those are the concerns core has. We'll cover provider concerns in a little bit. I guess we'll just go to a slide that doesn't have that. I'm hitting back, that would explain what happened there, sorry. Again, this is actually kind of a duplicate: Terraform reads the configuration and makes the DAG.

What we end up with is structures like this. It actually reads top-down, so the root is just the very base of what's going on. In this example, we've got a DNS record that points to an Amazon instance. Terraform will read the configuration, understand it needs to instantiate the Amazon and DNSimple providers, and then that first it needs to create the DNS record in order for the Amazon instance to then use that record. I think I said that right. That's actually backwards, maybe, but you get the point.

Moving on to...

Provider architectures

Terraform's goal is to provide infrastructure as code and to help you write and manage it. The providers' goal, then, is to handle the actual provisioning on any infrastructure, service, or cloud. The great thing about this architecture is that core itself doesn't have to worry about these provider-level things. Core doesn't understand that with Amazon you have five different ways to authenticate, and how to manage that; core just sees that as a block. Core doesn't understand that an instance can have several different things attached, or that there are caveats to destroying an instance and you need to disassociate these other things first. It just sees it as a block in the graph. Core doesn't care about that stuff. It offloads all that information to providers.

Again, here's some of our provider stuff. Provider responsibilities: detailed knowledge of the specific provider, like authentication, endpoints, and other various configuration. Providers then define the resources. A provider will declare, "I have support for all of these resources, here's where they are, these are their schemas." The resource itself defines an abstraction around the lifecycle management of one unit of that cloud: an Amazon instance, a DNS record, a GitHub team, a GitHub user, a Heroku app. The resources themselves contain that information.

Where I said earlier that Terraform core is concerned with diff, apply, and refresh, Terraform providers are defined in terms of create, read, update, and delete. If you've ever written a controller in an MVC application, those are all gonna be very familiar to you. The plug-in provider architecture gives you a package called helper/schema, which helps us define resources in just these terms. We give the provider resource a name, and then we define these methods, and core handles the state transitions. Core will know that this resource is being created, so it's going to execute the create method. It doesn't actually care what that method does; it just uses RPC to invoke it. The providers handle the actual specifics. Providers themselves are actually binaries. You can compile a provider all on its own and execute it; it'll just say, "I don't make sense outside of Terraform," but it is its own binary.

Terraform core will automatically discover and grab these binaries, and as I mentioned earlier, we use helper/schema to define the lifecycles there. What's next? Okay, we're gonna look at the folder structure of what a provider looks like. At minimum, we recommend about three files. You need a main.go, because that's how Go does things and that's how you define a binary; a provider.go, which defines provider specifics, usually authentication and connecting with an SDK (we'll see that in a minute); and then you have your resources or data sources. If you don't know what a data source is, it's like a resource, except it only has a read method. That actually ties into the extensibility of Terraform, in that it allows you to access infrastructure that you can't directly manage. Either it's permission-based, like your team does not have permission to create or destroy but you can consume the information, or perhaps the provider's API doesn't really let you do that.

A concrete example of a good data source: suppose one team's in charge of creating a base AMI using Packer, but you yourself aren't allowed to modify it. You can read it with a data source and still inject it into your infrastructure, as long as the team that makes it gives you the permission to do so. Looking at provider.go, we define the structure of what the provider itself needs to set itself up. In this very simple example, all we need is an access key; we'll just pass it in, that's an API key. Some providers are a lot more complicated, allowing a configuration file and several authentication options, but they can be as simple as just an access key. We also define our resources, and not shown would be a data source map. It's a mapping of the resources the provider supports: the name, which is the name that would appear in configuration, and the resource definition that matches it.
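
A minimal sketch of what that provider.go might look like with helper/schema; the "supercloud" naming and the environment variable are invented for illustration:

```go
package supercloud

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

// Provider defines what the provider block needs to set itself up,
// plus the map of configuration names to resource definitions.
func Provider() terraform.ResourceProvider {
	return &schema.Provider{
		Schema: map[string]*schema.Schema{
			"access_key": {
				Type:        schema.TypeString,
				Required:    true,
				DefaultFunc: schema.EnvDefaultFunc("SUPERCLOUD_ACCESS_KEY", nil),
			},
		},
		// The key is the name that appears in configuration; the value
		// is the resource definition it maps to.
		ResourcesMap: map[string]*schema.Resource{
			"supercloud_instance": resourceSuperCloudInstance(),
		},
		// Not shown on the slide: a DataSourcesMap works the same way.
	}
}
```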

Looking at ... let me go back. That supercloud instance in configuration maps to this resource, super cloud instance. This is what that would look like. This is an actual definition of a resource. We use helper/schema and we just return a resource with these defined methods. The methods obviously aren't shown, but that's all a resource is. You define how to read it, how to create it, how to update it, and how to delete it. Optionally, you don't even have to have an update: if all the attributes defined (I don't show the schema) are what's called ForceNew, which means changing them forces a recreation of the resource, then you don't need an update. Every change will just hit the delete and then the create. Here's an example of what I mentioned earlier as a data source; I just define the read. And there's an example of defining the schema, where you say it has an ID, it has a name, all that stuff. Then back to main.go, where we demonstrate just creating the binary.
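
Sketching that resource definition under the same invented "supercloud" naming; the stub method bodies stand in for real API calls:

```go
package supercloud

import "github.com/hashicorp/terraform/helper/schema"

func resourceSuperCloudInstance() *schema.Resource {
	return &schema.Resource{
		Create: resourceSuperCloudInstanceCreate,
		Read:   resourceSuperCloudInstanceRead,
		Delete: resourceSuperCloudInstanceDelete,
		// No Update defined: the attribute below is ForceNew, so any
		// change is a delete followed by a create.

		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}

// Stubs so the sketch stands alone; a real resource would call the
// service's API here.
func resourceSuperCloudInstanceCreate(d *schema.ResourceData, meta interface{}) error {
	d.SetId("instance-123") // normally the ID returned by the upstream API
	return nil
}

func resourceSuperCloudInstanceRead(d *schema.ResourceData, meta interface{}) error {
	return nil
}

func resourceSuperCloudInstanceDelete(d *schema.ResourceData, meta interface{}) error {
	return nil
}
```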

This is really simple. The Terraform plugin package there just gives you tools to hook in. All you have to do is say, I'm calling my provider method, which ... I can go back and show. That's the provider method right there; it returns the structure that says, "I need an access key." That's the provider function that runs it. When Terraform discovers this plug-in, it just invokes this method, and then the back channel of communicating over RPC is all handled for you.
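
The main.go is about this small; the import path for the provider package is hypothetical:

```go
package main

import (
	"github.com/hashicorp/terraform/plugin"

	// Hypothetical import path for the provider package sketched above.
	"github.com/example/terraform-provider-supercloud/supercloud"
)

func main() {
	// plugin.Serve sets up the RPC channel back to Terraform core and
	// serves the provider over it.
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: supercloud.Provider,
	})
}
```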

Version 0.10

In the beginning, Terraform was all one big repository. All of the binaries, all of the plug-ins, were kept in a single repository. In the initial releases, when you downloaded a new version of Terraform, you would get a zip file, and you would unpack it, and it would have a Terraform binary and a binary for every plug-in we had. That was how we packaged things at the time. We had this model: we had Terraform, and we had all these things, but they weren't actually inside the Terraform binary; they were just alongside it, and the discovery mechanism would find them there, but you had to copy them all into your path. As we grew, that presented a big, big problem, because you can see, right around version 0.7, we had 400 resources packed into, I think it was, 50 providers. Imagine downloading a zip file, unpacking it, and having 50 provider binaries in there, 40 of which you might not use yet.

That packaging was kind of hard. With version 0.7, we re-architected it so that they were still separate plug-ins, but they were actually compiled into the Terraform binary. The benefit there was that the Terraform binary did not actually expand in the way you would think of adding those sizes together, and it made it real easy for users, because they just had to download a new version of Terraform. That was great: it had all the plug-ins in there. It was still the plug-in system, still had RPC, but it was all in a single binary. That worked really well, except we ended up with what we call version sprawl. Starting with 0.7, our version numbers started to get really out there. I think 0.7 had like 14 releases. 0.8 probably had a similar number. 0.9, I think, went up to 0.9.11. What we found was that, as resources were being added, demand for having all those resources was high, and demand for getting fixes and additional features in these resources out was really high. We ended up releasing about every two weeks, because the Amazon provider alone would have 50+ entries in the changelog just for that provider, because there was so much movement. We were getting tons of new providers, new resources in all of them; we just had to keep releasing to minimize how much we were actually throwing out there.

As an operator, I would imagine you probably weren't very happy upgrading the core binary every two weeks to keep up with some of your fixes. Yeah, this is what I was saying: everything was in a single repository and it was tightly coupled. We were having releases like every two weeks, so yeah: 0.6.16, 0.7.13, 0.9.11. With version 0.10, we changed that. We actually split it: we took all the providers that were built in, and we put them in their own GitHub repositories. We ripped 'em out and shoved 'em over there. We changed Terraform so that it'll actually dynamically look for those providers. When you download version 0.10 of Terraform, you have no providers. Well, one, the Terraform provider, and maybe a null provider, but basically you have none. You now have a command called terraform init, which will go and dynamically pull down the providers you need for you.

I don't know what my next slide is, okay. What we've done here, by splitting them, is that all these providers now have their own separate release versions and release cadence. Which is great, because Terraform core ...

So, we've separated core and we've separated providers ... lost all my momentum. Okay, this does a couple things. One, core features take a lot longer to implement. They usually take weeks and weeks to plan, maybe even longer to actually write, because they deal with core, the graphing, which is the pinnacle of what Terraform has. Providers move a lot faster. Adding a resource, or a new provider itself, can be done in a matter of hours. When Google or Amazon comes out with a new press release about how they added a feature or a new resource, we could have a Terraform resource for it as soon as the SDKs are updated. Like, we have community members and employees who are just on it, trying to get those things in. Providers need to be able to release separately, and with this split, we've enabled that.

Terraform providers can now move at their own pace and be released independently. Terraform's users can now dynamically pull down the providers they need, with per-project dependency management, meaning one of your projects can be on a newer or different version of a provider than another, based on your needs. You could be locked into one for some reason, so you have version locking for those things. We also wanted to minimize change for operators: instead of upgrading Terraform core all the time, you can just upgrade the plug-ins. You can upgrade them as they're released, or each project can have new ones. Let's see. Nope, that's a bad title. I wanted to skip this slide, but I didn't.
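
Per-project version locking looks like this in 0.10; the constraint value is just an example:

```hcl
provider "aws" {
  # terraform init downloads a plug-in binary matching this constraint
  # into this project's .terraform/plugins directory.
  version = "~> 0.1"
}
```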

All right. That went kinda fast, but I think we're gonna be okay...

Extensibility

Talking about extensibility in Terraform, I meant a couple things. One, Terraform itself is extensible, because writing providers and writing resources is really, really simple. It's really, really easy to do once you get the hang of it; I say that having been doing it for three years. You don't actually have to know much about core at all. As I showed earlier, all you need to do is tie into the Terraform plugin package and the helper/schema framework that gives you the plug-in architecture, and you can run your resource. What we're gonna do here ... demo: provision, secure, connect, run any infrastructure.

Demo time

In my demo, I'm going to set up a Heroku app, I'm gonna set up a Lambda function, and I'm gonna get them to talk. How I'm gonna get them to talk is kind of more interesting than the other two. Again, on extensibility, I mentioned earlier that data sources let you consume providers that you might not necessarily have a lot of direct control over. Some providers just really don't give you the API to do that. I think I'm ready to exit now. I don't know what my next slide is, so we're just gonna do that.

Oh yeah, I wanted to talk about this one: disclaimers. I already wrote the Heroku app ahead of time. I wrote the Lambda function ahead of time. A lot of this was actually already written ahead of time, so I'm not gonna do a lot of live coding necessarily, but I am gonna touch live code and compile things. The code you are going to see is proof of concept; this is not the quality that I would ship normally, but I wanted to get this demonstration working. Live demos are always kinda crazy. As Kate mentioned, I have a button (thanks, Kate), and I've had a very painful experience with the button today, because wifi connectivity has not been so great. So, I can't tell you for sure if the button's gonna work. All right, on to demos.

Can everyone see that okay? All right. Here we go. Nope, that's not what I wanted. What do I have? Okay, so, terraform plan. Hold on, I missed something, as demos go. So, terraform plan (this is Terraform version 0.10) with just a Heroku file says, "I don't know what you wanna do, because you haven't run terraform init." terraform init is a command that you will run often; it's safe to do so, and you need to start your project with it. It will look at my configuration file, which right now is just this Heroku app here. The provider block is empty; most of the providers are gonna be empty because they're configured with authentication information from my environment. That's where I've kept those, so I don't show them on screen. We've defined just a very simple Heroku app.

The Heroku provider does not have a means of uploading code. terraform apply; this is just kinda basic setup stuff right now. terraform show. All right, I created an app, but there's nothing there. I can do heroku open ... oh yeah, duh. Okay, so here's the actual repository. So, git remote set-url ... I guess it's the same as when I did it earlier; I've done this demo a couple times now. git push heroku master, and I'm gonna shove up the code. It's a very, very simple Go application; you're not gonna be impressed. Apparently I'm inside a field. All right, heroku open. Nothing to see here, excellent. All right, so we did that. The next thing I'm gonna do is set up a Lambda function, right?

So, Lambda, okay, all right. This is my Lambda function, or the Terraform definition of it. Right here, this is the important part: the name is index, its filename is index.zip, it's a Node.js application, and it's already zipped up for us. Environment variables, and then this stuff is roles; they're just part of Lambda. So again, terraform plan is going to fail, because it says I don't have Amazon. I've changed my providers, or I've added a new one, so I need to do terraform init again. It discovers that I need the Amazon provider. Now, a quick little detour. Providers, as I mentioned earlier, are scoped by version. I'm not specifying a version, so it's grabbing the latest ones, and they're kept per project. They're actually stored in a folder right there called .terraform. You can see that under plugins, under darwin, the lock file gets the hash, and then it has the actual binaries in there. We could go and execute those if we wanted to, but we're not going to do that.
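
The Lambda definition described above would look roughly like this; the handler, runtime version, and role name are assumptions, not the demo's exact values:

```hcl
resource "aws_lambda_function" "chat" {
  function_name = "index"
  filename      = "index.zip"
  handler       = "index.handler" # assumed handler name
  runtime       = "nodejs6.10"    # a Node.js runtime of the era
  role          = "${aws_iam_role.lambda_role.arn}"

  environment {
    variables = {
      # Filled in from the Twitch data source later in the demo.
      TWITCH_CHANNEL = ""
    }
  }
}
```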

Okay, so. Those are the two kind of boring parts of the presentation; hopefully those are the boring parts. I mentioned earlier extensibility, and working with providers that you can't necessarily create or control completely. I wanted to use some data sources and try to get these two things to talk together. The next thing I'm gonna do is write a provider, and by write, I mean compile, because I've already written it. This provider is for a streaming service called Twitch. Twitch.tv is a service where you can go and stream your laptop, whether you're giving a talk or, probably, playing video games. I wanted to create a provider there, but the Twitch API doesn't really let me create a lot; it's mostly consuming. What I'm gonna do is, I've got a channel set up, and I want to create a provider for it.

Here we go, switching directories really quickly. I'm now in a directory called terraform-provider-twitch. I'm looking here at the file structure, and we see our main.go. Everyone see that okay still? Okay. So, there we go: main.go, and the provider function is the Twitch provider. Going here, into the directory, we see that provider. In the provider schema there, we define the schema: it needs an API key, by default it will read it from the environment, and it's got a description. Then I define my map here. The API for Twitch requires an ID for channels; last I looked, you can't look up by channel name, you need the actual internal ID. The twitch_user data source uses the token I have, and without any other input, it will find the channel for the person that matches the API key.

The plan, if we look here, is to use interpolation. We declare a twitch_user data source named "me." I don't give it any attributes, because I'm not looking for a specific person; I want it to find the person based on the token, and that's an implementation detail I'll show you in the actual code. Then for the channel, I want to use the result of "me": I want to take that ID and filter the channels by it. This is an example of interpolation. When Terraform reads this, it knows it has to do the twitch_user first, before it can do the twitch_channel, because it needs that value. Then there are just some example outputs; those will just show the output. I'm not gonna run that right now though.
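
Reconstructing that configuration from the description (the exact attribute names in this proof-of-concept provider are approximations):

```hcl
data "twitch_user" "me" {
  # No attributes: the provider finds the user matching the API token.
}

data "twitch_channel" "mine" {
  # Interpolation: Terraform must read twitch_user.me before this channel.
  channel_id = "${data.twitch_user.me.id}"
}

output "channel_url" {
  value = "${data.twitch_channel.mine.url}"
}
```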

Back to this. If we look at user, it's very simple: we just say get user, and when we get it, we just set the ID and the name. It's a data source, so it only has the single read method. Then we have channel, which is also not very impressive: channel ID, display name, URL (we're gonna use the URL here). I go and read the channel ID that's given to me from Terraform, I fetch it, and then I set those attributes. Now they're useful data sources. All right. Now, because this is a binary, all I have to do is go install. What that will do is compile it and put the resulting binary in my path, in my Go bin. All right, so there's the original Terraform, and now you see one side by side there, which is terraform-provider-twitch. Now we can actually use Twitch.
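
The user data source, as described, boils down to something like this; the Client type and GetUser call are hypothetical stand-ins for the real Twitch API wrapper:

```go
package twitch

import "github.com/hashicorp/terraform/helper/schema"

// Hypothetical stand-in for the real Twitch API wrapper the provider
// configures with the API token.
type Client struct{}

type User struct {
	ID, Name string
}

// GetUser would call the Twitch API and return the user matching the token.
func (c *Client) GetUser() (*User, error) {
	return &User{ID: "1234", Name: "example"}, nil
}

func dataSourceTwitchUser() *schema.Resource {
	return &schema.Resource{
		// A data source is just a resource with only a read method.
		Read: dataSourceTwitchUserRead,
		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Computed: true,
			},
		},
	}
}

func dataSourceTwitchUserRead(d *schema.ResourceData, meta interface{}) error {
	user, err := meta.(*Client).GetUser()
	if err != nil {
		return err
	}
	// "Set the ID and the name" is all this data source does.
	d.SetId(user.ID)
	d.Set("name", user.Name)
	return nil
}
```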

All right, I'll just leave those there for now. Now, I've got provider twitch, the twitch_channel, and I've got those outputs. So, terraform init: not found. Well, spell it correctly and then it'll work. All right, so it doesn't pull that in, because it's already in the path. But I can do terraform plan, and it'll read that thing. It actually won't show anything here ... oh, I never spun up the Lambda function, my fault. What normally happened when I practiced the demo earlier was that it wouldn't do anything, because data sources themselves are basically idempotent if you don't have them feeding into something else, so terraform plan won't show anything, 'cause there's nothing to do; all it's gonna do is read.

Right now we're uploading a binary, and here we see our outputs for Twitch. Let's do something with that. Instead of this, we want the Twitch channel: data.twitch_channel ... what'd I call it, me? Mine. I wanna do that, okay. Now, what I've done incrementally: I have a Heroku app, I have a Lambda function, and I'm gonna update their environment variables so they actually know where to go and how to talk to each other. Our Heroku app has nothing to see there. When I do terraform plan, we're going to see that I have now taken the Twitch data source, the channel, and I'm using its value and putting it into the Lambda function and the Heroku app. So, terraform apply.

Here, I've extended Terraform to kind of adopt a provider that I don't have a lot of control or access over, but that is still actually an important part of my infrastructure. Just because I can't create and completely manage it doesn't mean I can't actually use it and integrate my other existing infrastructure with it. If you go here, it's now going to load that. Super. Okay, so this is the part that gets less than super functional. Let's see, Lambda. We're gonna test this just to make sure everything works. This is my Lambda function, you can see my environment variables there, and I'm gonna call test, and it's going to work. It worked, hurray.

What I did there was: the Lambda function connects via IRC, it uses the channel name that I gave it in the environment variable, which we got from the data source, and then it actually connects and sends a message over the wire. The next thing I wanted to do was try to cross over a boundary and use one of these cool little IoT buttons. If you don't know what these are, they're a little button from Amazon that you can configure to trigger all sorts of things, like SNS. Here, I'm gonna use it to trigger a Lambda function, in theory. Now, I have to give more caveats here. In order to set this up, you need a certificate, and you need to enter ... I think you need to get the serial number out of it, so I had to do some manual setup. Those are physical things that had to be done; I had to actually connect to this on its own wireless network and upload a certificate to it. Those are things that Terraform really can't do, yet. We're still adding new features.

We want to ... what did I call it? What did I call it? Ah, yeah, wait. There we go. Without covering too much about how the button architecture works, you need to set up rules for it. What I'm doing here is, with IoT, I wanna say that when a click happens, I'm basically subscribing with this SQL statement here, and that's the actual serial number of my button. Don't go stealing my button. I'm gonna tell it that when the click comes through, I want it to trigger this Lambda function. So, we do terraform apply. I was told once that you're not supposed to show things that fail. You do terraform apply and it says, "Aww, this doesn't work. I don't know what an IoT rule is." Of course, we go to our checkout of the Terraform provider and we add it.

This is a new resource for Terraform: the IoT topic rule. I'm going to give it a rule name, the rule ARN will be calculated for me (I'll pull that down), and I'm gonna tell it what Lambda function I want it to talk to and the SQL that it needs to look for. And this is the implementation of the resource. It is really, really simple. Create: I just grab a couple values from my configuration, send them off, check the error, and then I set the ID. Same with read: I just read it, get the values I expect, and set the state. Delete: just tell it to delete itself.
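
A rough sketch of that resource; the iotClient interface hides the actual AWS SDK calls, and the attribute names are approximations of the proof-of-concept code:

```go
package aws

import "github.com/hashicorp/terraform/helper/schema"

// Hypothetical thin wrapper over the AWS IoT API; the real code would
// use aws-sdk-go.
type iotClient interface {
	CreateTopicRule(name, sql, lambdaARN string) (ruleARN string, err error)
	GetTopicRule(name string) (ruleARN string, err error)
	DeleteTopicRule(name string) error
}

func resourceAwsIotTopicRule() *schema.Resource {
	return &schema.Resource{
		// No Update: every attribute is ForceNew, so changes are a
		// delete followed by a create.
		Create: resourceAwsIotTopicRuleCreate,
		Read:   resourceAwsIotTopicRuleRead,
		Delete: resourceAwsIotTopicRuleDelete,

		Schema: map[string]*schema.Schema{
			"rule_name":  {Type: schema.TypeString, Required: true, ForceNew: true},
			"sql":        {Type: schema.TypeString, Required: true, ForceNew: true},
			"lambda_arn": {Type: schema.TypeString, Required: true, ForceNew: true},
			"rule_arn":   {Type: schema.TypeString, Computed: true},
		},
	}
}

func resourceAwsIotTopicRuleCreate(d *schema.ResourceData, meta interface{}) error {
	// Grab a couple of values from configuration, send them off,
	// check the error, then set the ID.
	arn, err := meta.(iotClient).CreateTopicRule(
		d.Get("rule_name").(string),
		d.Get("sql").(string),
		d.Get("lambda_arn").(string),
	)
	if err != nil {
		return err
	}
	d.SetId(d.Get("rule_name").(string))
	d.Set("rule_arn", arn)
	return nil
}

func resourceAwsIotTopicRuleRead(d *schema.ResourceData, meta interface{}) error {
	arn, err := meta.(iotClient).GetTopicRule(d.Id())
	if err != nil {
		return err
	}
	d.Set("rule_arn", arn)
	return nil
}

func resourceAwsIotTopicRuleDelete(d *schema.ResourceData, meta interface{}) error {
	return meta.(iotClient).DeleteTopicRule(d.Id())
}
```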

Let's go install. Now, when that finishes, a custom build of the Amazon provider is gonna have support for this, but not this one; I'm still on Amazon provider version 0.1. What I'm gonna do is delete the Amazon binary under .terraform/plugins/darwin. Bye-bye. We just have the Heroku one, and I'm gonna do terraform init, and you see I still just have the Heroku one there locally. But because the Amazon one is in my path, it's now going to use that Amazon provider; I'll show you that, yeah. Now I've got the custom Amazon one there. So, terraform plan, and we see: all right, I'm gonna add this rule. For your button, I'm now gonna trigger a Lambda function. I have a minute and thirty seconds for this to work. All right, so terraform apply, go.

The sad, sad truth: I'm gonna click this button and it's not gonna work. I'm gonna click the button and it's not gonna work because Terraform is at the mercy of the provider APIs it's given. Despite my GitHub issue that I filed, I think yesterday, you cannot actually invoke functions like this without manually adding a trigger. I made a rule that says whenever the button is clicked, I want the action forwarded to this Lambda function, but I need to make a matching rule on the other side that says this function can be triggered by an IoT button.

I click IoT ... sorry, that went really fast. I'm gonna add a trigger, say IoT, custom IoT rule, and we see my rule. That's the one I made just a moment ago, and that's the query I use, so I'm gonna say submit. Now, again, the internet has not been the most generous to me. We're gonna try this; it's actually set up to tether through my phone. We're gonna see if this works. Plug it in, get an IP address. It blinks; it has to reconnect to the wifi every time. It's blinking, and it went green. In theory, I just pushed a button, which should, in theory, send this. It should happen in the next 10 seconds or so; it unfortunately takes some time. Hey, it happened twice. I am resisting comments about JavaScript. Which, by saying that, I'm not.

All right, that's my demo. We had a lot cooler plans for the button, but then that whole trigger thing really, really threw me off. I tried desperately to work around it, but it didn't work.

So, yeah. What did we do? We set up a Heroku app, which I wrote before. We set up a Lambda function, which I wrote before, and learned about JavaScript. We wrote an entire new provider for Twitch and exposed two new data sources. Now you can leverage Twitch: you can pull information out of there. You can't really create things, but you can still integrate with it, because it has an API, and Terraform can do something with it. We extended an existing provider: we took a provider that comes off the shelf, that's maintained by HashiCorp, and we added things to it, and we didn't have to wait for it to come upstream. We added it ourselves and compiled our own version. You can cross-compile that, distribute it yourself, and use it however you want.

Oh, it's counting up; I'm over. That's why it's counting up, okay. Yeah, so we extended an existing provider, added a new resource, did it locally, and I clicked a button, and it actually worked. It worked way better than earlier when I was demonstrating it; I was waiting like 30 seconds and sweating the whole time. Now I'm done. Thank you.
