Dynamically manage and maintain multiple Terraform environment lifecycles with Jenkins.

Natan Bensoussan
Published in Israeli Tech Radar
Mar 23, 2020


TL;DR

Jenkins can be used to create, maintain and destroy multiple AWS infrastructure environments using Terraform code.

  1. Jenkins is run with the following parameters:
  • AWS region
  • Environment name
  • Action (Plan / Apply / Destroy)
  • Profile (AWS credentials to be used)
  • Email

2. Jenkins fetches the Terraform code from the Git repository.

3. Jenkins creates, maintains or destroys the environment given in the parameters, using the Terraform code, according to the requested action.

In this post, we will demonstrate how the lifecycles of multiple Terraform infrastructures can be easily and dynamically managed and maintained per environment with Jenkins.

Terraform:

Terraform implements infrastructure as code and provides the ability to create, update and destroy any kind of component and environment infrastructure in the cloud. Once provided with the credentials, the region and the required resources, it can create the requested infrastructure.

As Terraform is infrastructure as code, best practice is naturally to maintain it in Git. In this post it is assumed that the Terraform code is kept in Git. The examples are of course taken from this repository:

https://github.com/natanbs/Jenkins-Terraform

Some basic Terraform concepts are explained first, to help understand the project’s workflow:

Terraform env:

An infrastructure created by Terraform code. In this project it is just one VPC and a couple of EC2s, but it could be a very complex infrastructure of many VPCs, EC2s, RDS instances, load balancers, peering, transit gateways, Route 53 records and so on.

This project comes in handy in the scenario where the same infrastructure needs to be created and maintained multiple times in isolated environments.

Terraform init:

Prior to running the code, terraform init fetches all the components required for the implementation, for example the AWS provider plugin (Terraform uses its own binaries, independent of the OS). Any third-party modules, like the key-pair module used in this project, are downloaded and installed as well.

The required components are installed in the “.terraform” folder. If any modules are added or updated in a way that requires new installations, terraform init has to be run again.

In this post, terraform init is configured to run prior to every scenario (plan, apply and destroy) in order to confirm the job is running with an up-to-date Terraform configuration. This allows multiple admins to maintain the env: in case another admin made changes to the env, terraform init confirms your env is properly set.
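Running it is a single command; everything it downloads lands in the local “.terraform” folder:

$ terraform init
$ ls .terraform    # providers and third-party modules installed by init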

Terraform Plan:

Terraform reads the desired state from the code, compares it with the actual state in the cloud (AWS in this case) and provides a report of the plan. For a new env, it will most likely plan to create everything in the code from scratch. Once the env exists and a resource is added to or deleted from the code, Terraform will not recreate the env from scratch: the plan reports which components will be added or removed, based on the comparison between the requirements in the code and the existing infrastructure in the cloud.
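The plan can also be written to a file, so that a later apply executes exactly the reviewed plan; this is how the pipeline below uses it:

$ terraform plan -detailed-exitcode -out=tfplan    # exit code 2 means there are pending changes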

Terraform state:

The Terraform state is the core of Terraform’s logic. Once Terraform compares the requirements in the code with the reality in the cloud, it records the result in a tfstate file. The tfstate file is critical as it contains the current state of the infrastructure: without this file, or in case this file (and its backup) is corrupted, Terraform will not be able to proceed.
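The state can be inspected at any time; both of these commands read the tfstate:

$ terraform state list    # the resource addresses tracked in the state
$ terraform show          # a full human-readable dump of the current state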

Cloud bucket:

Having the tfstate file locally may be sufficient as long as there is a single admin. But what if there are more admins? They will not be able to use the first admin’s local tfstate file, hence they will not be able to make any changes to the env.

Terraform supports using buckets (S3 in AWS) to keep the tfstate file in the cloud, where every admin can access it. Terraform also keeps a lock in a dedicated database (DynamoDB in AWS), which prevents any admin from running Terraform against an env that is currently being handled. Any admin can maintain the env as long as its tfstate is not locked.
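The backend settings can be passed to terraform init. A minimal sketch, where the bucket and table names are placeholders (this project declares the real ones in its base folder):

$ terraform init \
    -backend-config="bucket=my-terraform-states" \
    -backend-config="key=jenkins-terraform/terraform.tfstate" \
    -backend-config="region=eu-central-1" \
    -backend-config="dynamodb_table=my-terraform-locks" \
    -backend-config="profile=tikal"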

Terraform workspaces:

Having the tfstate in the cloud is great if you have one env, for example a development infrastructure. A second env would require a separate copy of the code in another folder and another bucket in the cloud; not because you can’t use variables, but because the envs cannot share the tfstate file.

The solution is Terraform workspaces, which locally create a “terraform.tfstate.d” folder in which each env gets its own subfolder where its tfstate file is maintained.

Similarly in the cloud, the bucket holds an “env:” folder with a subfolder per env where its tfstate file is kept. The cloud DB (DynamoDB in AWS) holds a table for the project with one item per env. Terraform workspaces make it easy to switch between envs on the fly, which allows our project to run the same job against different envs, each with its own state, on demand.
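The workspace commands are simple; “dev” here is just an example env name:

$ terraform workspace new dev       # create the env's workspace and switch to it
$ terraform workspace select dev    # switch to an existing env
$ terraform workspace list          # list all envs; '*' marks the current one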

Getting to business:

In this post we will use AWS and create a VPC with a couple of servers.

Each Jenkins job run will be able to plan, apply or destroy the env’s VPC and its dependencies.

Jenkins would clone the repository of the Terraform infrastructure code of the project.

The key-pairs and the terraform apply output report are stored as Jenkins artifacts for reference.

For this post the AWS free tier can be used; the EC2s are created as free-tier t2.micro instances.

Requirements:

- Basic knowledge of Terraform, Jenkins, AWS and scripting.

- AWS cli and credentials

- Terraform cli

- Jenkins plugins: CloudBees AWS Credentials and Terraform (installation detailed below).

The project’s Terraform file structure includes 3 folders, as can be seen below:

- base: Initialises the project: declares the profile, region, operators, bucket and DynamoDB table.

  • Needs to be applied once, before creating the envs. When switching between envs, the base init should be performed.
  • The base uses the “modules”, “backend” and “main” modules.

- main: Creates the env: the VPC, security groups, a couple of EC2s (and their key-pairs), and any other components included in the Terraform code.

- modules: Implements the base initialisation.

Jenkins can create, update or destroy any infrastructure defined with Terraform. In this example we use a basic setup of a couple of EC2s.

Below is the project’s full structure, which Jenkins takes from the Git repository:

Git repository:

https://github.com/natanbs/Jenkins-Terraform

The procedure to run the terraform code:

Set your AWS credentials:

$ cat ~/.aws/credentials
Example output:
[tikal]
aws_access_key_id = ***************OUEU
aws_secret_access_key = ********************************i8S8

$ aws configure
Example output:
AWS Access Key ID [****************OUEU]:
AWS Secret Access Key [****************i8S8]:
Default region name [eu-central-1]:
Default output format [text]:

Initiate the project:

The base init stage will prepare the S3 bucket and the DynamoDB table:

$ cd base
$ terraform init
$ terraform plan
$ terraform apply

The env’s lifecycle - init, plan, apply, destroy:
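Run manually from the “main” folder, the lifecycle of a single env (a hypothetical “dev” here) looks roughly like this; the Jenkins job described below automates exactly these steps:

$ cd ../main
$ terraform init
$ terraform workspace select dev || terraform workspace new dev
$ terraform plan -out=tfplan
$ terraform apply tfplan
$ terraform destroy    # only when tearing the env down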

Created components:

Once terraform apply has completed, you will be able to find all the created components in the AWS console:

  • S3 bucket (under “env:” you will find a folder per environment, which includes the env’s tfstate file)
  • DynamoDB table (which manages and maintains the tfstate locks)
  • VPC
  • Subnets: two private and two public
  • Route tables
  • Internet Gateway
  • Network ACLs
  • Security Groups
  • EC2s: two servers (server1 and server2 in this example)
  • Key-pairs
  • Network interfaces
  • Any other components created by the Terraform code.

With the Terraform code cloned from Git and an AWS account with the AWS credentials set, the env can be instantly created in AWS.

Managing a couple or a few such envs would be easy. But what happens when there are tens or more of them?

Jenkins:

Required Jenkins plugins:

  • CloudBees AWS Credentials
  • Terraform

Terraform plugin installation:
Manage Jenkins > Global Tool Configuration > Terraform

Setting the Terraform plugin:
Terraform Name: terraform-0.12.20
Version: Terraform 0.12.20 linux (amd64). Notice the OS and platform.

Node installation

Create a permanent node:

Manage Jenkins > Manage Nodes > New Node

This agent is configured with the SSH launch method; however, any preferred method can be used.

Add a jenkins user and paste its public key.

We are now ready to set up the job.

Job settings

Create a Pipeline job and select:

  • This project is parameterized

Then add the following parameters:

  • Choice parameter AWS_REGION: add your regions (one or multiple).
  • String parameter ENV_NAME: represents the environment/workspace/customer.
  • Choice parameter ACTION with the values ‘plan’, ‘apply’ and ‘destroy’: the actions Jenkins will trigger in Terraform.
  • String parameter PROFILE: the AWS credentials profile to be used.
  • String parameter EMAIL: the emails or mailing list of the admins.
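Once parameterized, the job can also be triggered remotely through Jenkins’ buildWithParameters API. A hypothetical example (host, job name, user and token are placeholders):

$ curl -X POST -u user:apiToken \
    "https://jenkins.example.com/job/terraform-envs/buildWithParameters" \
    --data "AWS_REGION=eu-central-1&ENV_NAME=dev&ACTION=plan&PROFILE=tikal&EMAIL=admin@example.com"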

Set the Git for the Pipeline:
Pipeline definition: Pipeline script from SCM
SCM: Git
Repositories:
Repository URL: git@github.com:natanbs/Jenkins-Terraform.git
Credentials: the jenkins user’s credentials.

Script Path: Jenkinsfile

Setting AWS Credentials

To configure the AWS Credentials:

  • Install the CloudBees AWS Credentials plugin
  • Settings: Manage Jenkins > Configure System > Global properties

Email settings

To set the email notification parameters:

  • Settings: Manage Jenkins > Configure System > Extended E-mail Notification

The Pipeline

Jenkinsfile - Declarative Pipelines

The declarative pipeline is a relatively new Jenkins feature that uses the Jenkinsfile to support the pipeline-as-code concept, based on Groovy. Still, basic scripting knowledge is sufficient to understand the script.

The Jenkinsfile code is composed of the following major contexts:

  • The terraform command function
  • Pipeline settings: The selected slave and the job’s parameters.
  • Checkout & Environment Preparations: AWS access and Terraform settings.
  • Actions: Terraform plan, apply and destroy
  • Email notification

The full Jenkinsfile can be found in the git repository above.

https://github.com/natanbs/Jenkins-Terraform/blob/master/Jenkinsfile

The major context parts will be elaborated below:

The terraform command function

The following tfCmd function wraps the terraform command that is executed in each action. The tfCmd parameters are ‘command’ and ‘options’:

  • command: The terraform action (init/plan/apply/destroy)
  • options: The terraform required options like the output file etc.

Note: Do not confuse Jenkins workspaces with Terraform workspaces:

  • Jenkins workspace is the Jenkins directory where the job is running.
  • Terraform workspace is the environment created.

Since each run of the job can target a different env, the following parameters are set every time the terraform command (tfCmd) is run:

  • WORKSPACE - Jenkins workspace
  • ENV_NAME - The Terraform workspace (env). Each time the command is run, the Jenkins ($WORKSPACE) and Terraform ($ENV_NAME) workspaces are selected, and “terraform init” is run in the base and main folders.
  • ACCESS - The AWS credentials profile to be used, taken from the PROFILE parameter when running the build, with its settings shown above.

Terraform init - Performed in both the base and main directories on each run, to make sure the env is up to date, since the job can be run by other admins on other slaves.

Environment - The environment (Terraform workspace) that will be created (qa/dev/prod etc., a customer name, or any other type of env).

Terraform show - Performed after each command; it outputs the current state to a file. This file is saved as an artifact if the action was ‘apply’.
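Putting this together, a rough shell equivalent of what each tfCmd invocation does (the real function is Groovy, in the Jenkinsfile linked above; the output file name here is illustrative):

# Run from the main folder.
tfCmd() {
  command=$1    # the terraform action: init / plan / apply / destroy
  options=$2    # extra options, e.g. "-detailed-exitcode -out=tfplan"

  # Refresh both folders so this run picks up changes made by other admins.
  (cd ../base && terraform init)
  terraform init

  # Select the env's Terraform workspace, creating it on first use.
  terraform workspace select "$ENV_NAME" || terraform workspace new "$ENV_NAME"

  terraform $command $options

  # Dump the current state; archived as an artifact when the action is 'apply'.
  terraform show -no-color > "show-${ENV_NAME}.txt"
}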

Pipeline settings

agent - The slave’s label.

Parameters

  • AWS_REGION (choice): The job supports multiple AWS regions.
  • ENV_NAME (string): The created env (Terraform workspace).
  • ACTION (choice): The terraform action to perform (plan/apply/destroy).
  • PROFILE (string): The AWS credentials profile to be used.
  • EMAIL (string): Emails or mailing lists for notification.

Checkout & Environment Preparations

AWS Access credentials:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • credentialsId: Taken from Jenkins Credentials:

Setting Terraform using the AWS credentials:

  • tool name: ‘terraform-0.12.20’ - the Terraform version from the Terraform plugin configuration above.
  • aws configure: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, PROFILE

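In effect, the credentials step writes the injected keys into the named profile, roughly as follows (the profile name comes from the PROFILE job parameter):

$ aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile "$PROFILE"
$ aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile "$PROFILE"
$ aws configure set region "$AWS_REGION" --profile "$PROFILE"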

Actions

Action ‘plan’:

Runs only when the action ‘plan’ or ‘apply’ is selected. Credentials are set as above. If ‘apply’ is selected, a ‘plan’ is run first and its tfplan output file is used.

Action ‘plan’ command: Runs the command ‘plan’ with the options ‘-detailed-exitcode -out=tfplan’. It creates a tfplan file to be used by ‘apply’.

Action ‘apply’:

Runs only when the action ‘apply’ is selected. Credentials are set as above.

Action ‘apply’ command: A simple command; it runs: terraform apply tfplan

Artifacts: Once action ‘apply’ is performed and the env is created, the following artifacts will be saved:

  • Key pairs of the EC2s
  • Output report of the current state

Action ‘destroy’:

Runs only when the action ‘destroy’ is selected. In that case, a confirmation is prompted to approve the deletion of the env. Credentials are set as above.

Action ‘destroy’ command: Runs: terraform destroy -auto-approve

The -auto-approve option prevents Terraform’s own prompting (the pipeline already asked for confirmation).
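In terms of the tfCmd sketch shown earlier, the three actions boil down to:

tfCmd plan "-detailed-exitcode -out=tfplan"
tfCmd apply "tfplan"
tfCmd destroy "-auto-approve"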

Email notification

Once the job is complete, a notification email is sent with the following details:

  • Env (Terraform workspace)
  • Job name and number
  • Action performed
  • Region
