Infrastructure as Code: A Peek into the Power of Terraform

Today, large enterprises use multiple cloud providers and technologies. Enterprises that distribute workloads across on-prem infrastructure, private clouds, and public clouds need teams versed in different CLI commands and scripting languages, and they require specialists in each of these fields.

Unfortunately, all of this carries a hidden cost: the company has to train its employees in multiple technology stacks to manage different infrastructures. Moreover, writing CLI scripts for these tasks over and over again consumes both time and resources.

Enter Terraform, which provides a solution to this problem. 

What is Terraform?

Imagine Terraform as the single source of truth for your infrastructure scripts across multiple environments. Terraform scripts can scale across multiple clouds, such as Google Cloud, AWS, and Azure, as well as on-premises infrastructure. You can use this power to save time and write readable code that represents your infrastructure, organized in a single project or a single file as required.

The Terraform workflow involves writing the code that describes your infrastructure, verifying that the changes Terraform is about to execute will have the desired outcome, and finally applying those changes to the infrastructure.

Terraform's extensibility comes from its providers. Providers exist for most of today's cloud platforms, and new ones can be written for platforms yet to come to market. On top of this, you can write your own provider code against the Terraform API and publish it to the Terraform Registry.

Major cloud providers, such as Google Cloud, have started shipping Terraform built into their online shells. As a result, customers can leverage the power of Terraform while utilizing the provider's infrastructure.

Using Terraform to manage system infrastructure

This guide gives you insight into the power of Terraform by managing basic infrastructure on Minikube. Installation and setup for Terraform and the project will also be covered. Additionally, we discuss the Terraform code that handles a Kubernetes deployment and adds Helm charts to the repository, followed by use cases of Terraform in the cloud. The guide concludes by describing the next steps you can take on your infrastructure as code (IaC) journey.

At a high level, here are the main topics covered in this guide:

  1. Overview of Kubernetes and Docker infrastructure
  2. Terraform syntax and setup
  3. Infrastructure management using Terraform
  4. Cloud use cases for Terraform
  5. Next steps

Overview of Kubernetes and Docker infrastructure

The guide will deploy an existing open source project’s image to a Kubernetes environment. However, you can use any executable image from Docker Hub or another open container registry instead of the one used here. A prerequisite for following this tutorial is having Docker, Minikube, and Helm installed on your local machine. If you don’t have these installed, the guide will share resources you can utilize to get them running. If you need some help, learn how to deploy a React app to Kubernetes using Docker.

First, we’ll pull the Docker image to our local machine, run it, and test whether it works. After verifying this, we’ll create a Kubernetes deployment for a public image and scale the deployment to four replicas on Minikube. We’ll also install MySQL as the database using Helm charts and walk through the scripts for this using the Helm provider; the container being used might require MySQL when running at larger capacity. Finally, we will write a termination script to delete the replicas of the application and free up system resources.

Remember, Terraform is also an excellent tool that large enterprises use for retiring existing infrastructure, and it can scale across regions. A single Terraform script can describe infrastructure changes, review the changes being made, and interact with both low-level and high-level components to put the desired changes into effect.

Let’s get the Kubernetes environment up and running using Minikube. If you don’t have it installed on your local machine, follow this link to install Minikube.

$ minikube start

This starts a single-node Kubernetes cluster running on a backend. In this case, a Docker-based backend runs Minikube, but you can use a virtual machine or Podman-based backend, too.

This command gives us access to kubectl, the command line interface that sends requests to the Kubernetes API server. The API server then instructs the control plane to execute the changes using internal contracts. Kubectl works by referring to the Kubernetes configuration file, which is stored at ~/.kube/config on our local system. As we want a single source of truth for Docker, Kubernetes, and Helm charts, we’ll use Terraform configurations that reference this Kubernetes configuration.

The next step is to install Helm on your local machine. To do that, follow this guide and then move to the next section.

Terraform syntax and setup

Install Terraform using brew if you’re using a Mac. Follow this guide to install Terraform on other systems.

$ brew install terraform 

The code presented in this guide can be found here. Or, you can follow the next steps to create these files yourself.

Let’s start by writing a Terraform script to pull a Docker image and run it. This will introduce us to basic Terraform concepts. 

Start by creating two files: a main file for the required providers (we’ll call it main.tf) and a Docker-specific file (docker.tf). As we will be working with Docker, we need to specify the provider address for Terraform. Providers can be seen as the main implementers of the underlying infrastructure we need, which in the current case is Docker. We need the latest stable version of the Docker provider. The provider list can be found here. We’ll keep the required providers in main.tf and the functionality related to each provider in a subfile named after that provider. Add the following snippet to main.tf:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0" # or pin to the latest stable version
    }
  }
}

Next, we tell Terraform to use the Docker provider through the default Docker socket. To do that, add the following snippet to docker.tf:

provider "docker" {
 host = "unix:///var/run/docker.sock"
}

We want to first pull a Docker image from Docker Hub and then create a Docker container. A Terraform block starts with its kind, which can be data or resource. This is followed by the data or resource type corresponding to the provider being used. Finally, a local name for the block is specified.
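As an illustration of this anatomy (using a hypothetical nginx image that is not part of this project), a data block looks like this:

```hcl
# kind   type             local name
data "docker_image" "nginx" {
  # arguments for the block go inside the braces
  name = "nginx:latest"
}

# other blocks reference it with dots: data.docker_image.nginx
```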

First, you need to clone shadowshotx/product-go-micro on your computer and use the following command in the project directory to build it into a local Docker image:

$ docker build . -t shadowshotx/product-go-micro:latest

Now that you’ve created the image, you can reference it in the Terraform configuration. The snippet below fetches the Docker image and then spins up a Docker container named my-site. In Terraform, sub-objects are referenced with dots, as in data.docker_image.my-site. Append the following snippet to docker.tf:

data "docker_image" "my-site" {
  name = "shadowshotx/product-go-micro:latest"
}

resource "docker_container" "my-site" {
  image = data.docker_image.my-site.id # reference the image fetched above
  name  = "my-site"
}
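If you also want to reach the container from your host, the Docker provider's docker_container resource supports a ports block. As a sketch, you could extend the my-site resource like this; the port numbers are illustrative assumptions, since the port product-go-micro actually listens on may differ:

```hcl
resource "docker_container" "my-site" {
  image = data.docker_image.my-site.id
  name  = "my-site"

  # map container port 80 to host port 8080 (illustrative values)
  ports {
    internal = 80
    external = 8080
  }
}
```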

Infrastructure management using Terraform

Next, we move on to the Kubernetes environment, which represents infrastructure management. In public cloud platforms, Kubernetes environments are an abstraction over base infrastructure services. In this case, your local machine runs Kubernetes and lends memory and compute power to the pod applications.

So, we tell Terraform to add the Kubernetes provider as a required provider. Add the following snippet inside the required_providers block in main.tf:

kubernetes = {
  source = "hashicorp/kubernetes"
}

For the Terraform Kubernetes provider, we specify the path of the Minikube configuration and set the provider's context to minikube. Next, we want a deployment resource in the default namespace. We specify the deployment's name in the metadata section. In the spec section, we set the number of replicas and add a selector.

It’s essential to have labels and selectors here; otherwise, the deployment won’t know which pods it manages. In the template section, we define another spec section containing the image to use and the port to expose on the pod. This resembles the default pod definition in Kubernetes, which is usually written in YAML or JSON format. To represent all of this, add the following snippet in a new kubernetes.tf file:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

resource "kubernetes_deployment" "default" {
  metadata {
    name      = "product-go-micro"
    namespace = "default"
  }
  spec {
    replicas = 4
    selector {
      match_labels = {
        admin = "shadowshotx"
      }
    }
    template {
      metadata {
        labels = {
          admin = "shadowshotx"
        }
      }
      spec {
        container {
          image = "shadowshotx/product-go-micro"
          name  = "product-go-micro"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}
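The deployment runs the pods but does not expose them outside the cluster. As an optional extension that is not part of the original project, you could add a kubernetes_service in the same file that selects the pods by their admin label:

```hcl
resource "kubernetes_service" "default" {
  metadata {
    name      = "product-go-micro"
    namespace = "default"
  }
  spec {
    # must match the pod labels set in the deployment's template
    selector = {
      admin = "shadowshotx"
    }
    port {
      port        = 80
      target_port = 80
    }
    # NodePort makes the service reachable from the Minikube host
    type = "NodePort"
  }
}
```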

Further, we want to add a Kubernetes add-on in the form of a Helm chart. Helm charts are pre-packaged software for Kubernetes use cases. In this case, we’re deploying MySQL, provided by Bitnami. In the provider block, we just need to share the path of the Kubernetes configuration and then specify the resource we want to create: a helm_release named mysql-release. We then specify the repository and the chart name referencing the Helm chart to be installed. Add the following in a new helm.tf file:

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "mysql-release" {
  name       = "mysql-release"
  repository = "https://charts.bitnami.com/bitnami" # Bitnami chart repository
  chart      = "mysql"
}
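Chart values can also be overridden from Terraform with set blocks. As a sketch, here is an extended version of the mysql-release resource; the auth.rootPassword value name is an assumption based on the Bitnami MySQL chart, so check the chart's documentation for the exact value names:

```hcl
resource "helm_release" "mysql-release" {
  name       = "mysql-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "mysql"

  # override a chart value at install time
  set {
    name  = "auth.rootPassword"
    value = "change-me" # placeholder; use a secret in real setups
  }
}
```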

Finally, it’s time to deploy the changes in the Terraform files we just wrote. Initialize Terraform by running the following command in the same directory. This command installs the providers Terraform requires to execute the infrastructure changes:

$ terraform init

Terraform also has a very useful feature that lets us preview the changes about to be made. The command below reports the current state of the system and the actions Terraform will take to bring it to the desired state:

$ terraform plan

If the plan resonates with the code you’ve written, you can apply the changes to the infrastructure.

$ terraform apply

After answering “yes,” Terraform applies the changes and prints a summary of the resources it created.

To check that the changes have taken place, run the following command:

$ kubectl get pods

If there are four replicas up and running for the desired image, Terraform is working and the changes have taken place.

You can see the Docker containers with:

$ docker container ls

Finally, clear the Terraform deployments to free up your system:

$ terraform destroy

Cloud use cases for Terraform

The main objective of the above project was to give you insight into the power of Terraform. With a single language, we handled three different tech stacks without needing to learn a separate syntax for each. We deployed a Docker container, a Kubernetes deployment, and a Helm release using a single framework. This section looks at some common use cases for Terraform and how it is used in the real world.

Terraform is also used as a baseline strategy in the cloud. The project we just completed ran on your local machine, using providers for different technologies. Large enterprises, however, use public cloud environments that have their own providers, such as Google Cloud, AWS, and Azure. Just as each provider defines its own resource and data types (e.g., docker_container for the Docker provider), the public clouds expose their services as different Terraform resources. You can leverage this to keep track of infrastructure deployments in code form.
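For example, here is a minimal sketch of the same pattern with the AWS provider; the AMI ID is a placeholder, not a real image:

```hcl
provider "aws" {
  region = "us-east-1"
}

# an EC2 instance is just another Terraform resource,
# like docker_container was for the Docker provider
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}
```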

Because you can implement custom providers, enterprises with on-prem infrastructure can gain multi-cloud capabilities by creating and maintaining Terraform scripts. The functionality and style remain consistent across platforms, freeing engineering resources to implement best practices and abstract code in a meaningful way. If Terraform resources are organized in an orderly fashion, they can be of huge help to engineering teams.

Terraform can also help you automate the infrastructure required for your performance or behavior-driven tests by attaching the scripts to CI/CD pipelines. This can be seen as a test bed setup for the resources where your application will run. Such technologies are crucial when developing applications that need to be highly reliable and constantly available.

Many enterprises run their applications in different environments, such as QA, pre-production, and production. For the app to run, we need different versions in all of these and pre-provisioned infrastructure.

Creating this manually is a complex task, as many repetitive commands need to be executed, and if a single instruction in the chain of commands is missed, the infrastructure may behave unexpectedly. Keeping the infrastructure in the form of Terraform code lets you review it before deploying and dynamically spin up new environments from templates. This makes the setup more reliable.
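One way to sketch such templating with the providers used in this guide is a variable that parameterizes the environment; the names below are illustrative:

```hcl
variable "environment" {
  type    = string
  default = "pre-production" # or "qa", "production"
}

# each environment gets its own namespace, derived from the variable
resource "kubernetes_namespace" "env" {
  metadata {
    name = var.environment
  }
}
```

Running the same configuration with a different value for the variable then reproduces the environment from the same template.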

Finally, for applications with complex setups, Terraform scripts can serve as a base setup executor: a bootstrap script that users run locally to get a sample application up and running. The project above is an example of this, as it can be seen as boilerplate that deploys the application and installs its dependencies.

Building forward

This guide presented a demo project demonstrating the power of Terraform.

To continue your learning, you could research more Terraform-based providers and play with complex infrastructure if your local machine allows it. You may also be ready to start experimenting with Terraform on GCP or AWS. 

If you like engineering content like this, browse the Mattermost library and learn more about working with cutting-edge technologies.

This blog post was created as part of the Mattermost Community Writing Program and is published under the CC BY-NC-SA 4.0 license. To learn more about the Mattermost Community Writing Program, check this out.