Deploy AWS resources & Jenkins Pipeline using Terraform

Sameed Uddin Mohammed
10 min read · Aug 3, 2021

What is Jenkins?

Jenkins to the rescue! As a Continuous Integration tool, Jenkins allows seamless, ongoing development, testing, and deployment of newly created code. Continuous Integration is a process in which developers commit changes to source code in a shared repository, and every change is built continuously; this can happen multiple times a day. Each commit is monitored by the CI server, which increases the efficiency of code builds and verification, reduces the burden on testers, and permits quicker integration with fewer wasted resources.

What is Terraform?

Terraform is an infrastructure-as-code (IaC) tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. on various platforms.

How Terraform Works

Terraform allows infrastructure to be expressed as code in a simple, human readable language called HCL (HashiCorp Configuration Language). It reads configuration files and provides an execution plan of changes, which can be reviewed for safety and then applied and provisioned.
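As a rough illustration (not code from this project's repo), a minimal HCL configuration describing a single AWS resource might look like the sketch below; the region and bucket name are placeholder assumptions:

# Configure the AWS provider (region is an assumption; pick your own)
provider "aws" {
  region = "us-east-1"
}

# A single resource block: "terraform plan" shows what would be created,
# and "terraform apply" creates it.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345"   # S3 bucket names must be globally unique
}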

Extensible providers allow Terraform to manage a broad range of resources, including IaaS, PaaS, SaaS, and hardware services.

With that being said: how are we going to do it?

Part 1:- Let's go through the Prerequisites:

  • Create an AWS Account
  • Setup AWS User with Admin Role & permissions in IAM
  • Install the AWS CLI on your PC and set up an Access Key ID and Secret Access Key, so you don't have to configure AWS credentials in Terraform later.
  • Install Terraform
  • Install Git
  • Install Visual Studio Code (one of the best IDEs)

Our Goal for this Project Lab:

We are going to create 3 things:

  • VPC with CIDR 10.0.0.0/16
  • 2 subnets (public) with CIDR 10.0.1.0/24 and 10.0.2.0/24
  • An Auto Scaling group of Amazon Linux 2 EC2 instances (t3.small or t3a.small) with a minimum of 2 instances and a maximum of 3

Provisioning this kind of infrastructure manually usually takes much longer, but the benefit of using an IaC tool like Terraform is that we can deploy our resources in far less time from code.
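To give a feel for what that looks like in HCL, here is a minimal sketch of such a configuration. This is not the exact code from the repo: the availability zones, AMI filter, resource names, and the t3a.small instance type are assumptions, and the internet gateway, route tables, and security groups are omitted for brevity.

# VPC with CIDR 10.0.0.0/16
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Two public subnets (availability zones are assumptions)
resource "aws_subnet" "public_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

# Latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Launch template used by the Auto Scaling group
resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = data.aws_ami.amazon_linux_2.id
  instance_type = "t3a.small"
}

# Auto Scaling group: minimum 2 instances, maximum 3
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.public_1.id, aws_subnet.public_2.id]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}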

Part 2:- Time to Build

  • Complete all the prerequisites and open Visual Studio Code.
  • I have a GitHub repo here which you can clone or fork, then download with the following command: $ git clone [forked repository url]
  • Make sure to update line 24 of the Jenkinsfile with your forked repository URL.

Why do we need a backend? (Optional for this project)

According to Terraform best practices, it's best to treat your state file as a secret and store it remotely. This improves security and also allows collaboration between team members. When the state file is stored locally, you can't easily share it with your team, and they can't easily access it.

Another benefit of using Terraform's remote backend is state locking: the backend locks your state while someone is provisioning resources, because if two team members try to make changes at the same time, the state file could become corrupted. S3 doesn't enable state locking by default; if you would like to add state locking to your S3 backend, please review the Terraform documentation.
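For reference, the S3 backend block in backend.tf generally looks something like the sketch below; the bucket name, key, region, and DynamoDB table name are placeholders, and the actual file in the repo may differ slightly:

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket-12345"      # replace with your unique bucket name
    key    = "jenkins-terraform/terraform.tfstate"  # path of the state object inside the bucket
    region = "us-east-1"

    # Optional: uncomment to enable state locking with a DynamoDB table
    # (the table must already exist and have a "LockID" partition key)
    # dynamodb_table = "terraform-state-lock"
  }
}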

  • Create an S3 bucket using the AWS CLI with a unique name; adding random numbers to the bucket name helps, since S3 bucket names must be globally unique across all AWS accounts.
$ aws s3api create-bucket --bucket my-bucket --region us-east-1
Make sure to replace my-bucket with your own unique name-number.

(or)

  • If you have already created an S3 bucket for a remote backend in the past (like me), just confirm its name so you can put it in the backend.tf file:
$ aws s3 ls
  • Open the backend.tf file in the repo you have downloaded and update the bucket name to your newly created bucket name.
  • Before you initialize Terraform, make sure your terminal is pointing to the directory where the repo was downloaded:
$ cd <path>

Then run the following command to initialize the Terraform plugins and modules and to use the remote backend:

$ terraform init

  • Make sure Terraform is using the S3 backend with this command:
$ aws s3api list-objects --bucket [bucket name]
  • After running it with my bucket name, I verified that the state file does exist in the S3 bucket.

Part 3:- Verify the Code

  • Let’s apply the code to verify everything is working as it should before creating our pipeline.
$ terraform fmt rewrites Terraform configuration files to a canonical format and style.
$ terraform validate runs checks that verify whether a configuration is syntactically valid.
$ terraform plan creates an execution plan.
$ terraform apply --auto-approve applies the changes required to reach the desired state of the configuration.
It will also write data to the .tfstate file, and the --auto-approve option skips the prompt to type yes before the resources are created.
Once all the resources are deployed, verify them by checking the public IP address of one of the EC2 instances, or alternatively check the EC2 web console.
$ aws ec2 describe-addresses
  • Once you are done verifying, it's time to delete the resources using the command:
$ terraform destroy
Type "yes" to confirm.
  • Before we go further, and once you have verified that your modified code is working, push your changes to your GitHub repository:
$ git push origin master

Part 4:- It's time to deploy the Jenkins server

  • You could create an EC2 instance and configure and install Jenkins manually, but we are going to make things as efficient and fast as we can.
  • In order to do that, let's use an AMI from the AWS Marketplace.
  • There will be a small cost associated with this because of the instance type we are using for Jenkins.

Steps for Jenkins installation:-

  1. Navigate to AWS Marketplace.
  2. Click Discover products.
  3. Search for “Jenkins” and select Jenkins Certified by Bitnami.

4. After reviewing the AMI, you can see that this AMI and EC2 instance class (t3a.small) don't fall under the AWS free tier, but only a very small amount will be charged.

  • Press the Continue to Subscribe button > Continue to Configuration > Continue to Launch.
  • For Choose Action, select Launch from Website; for EC2 Instance Type, select t3a.small.
  • Leave the default VPC Settings (or customize it if you need.)
  • Leave the default Subnet Settings (or customize it if you need.)
  • For Security Group Settings, select a security group that has SSH, HTTP, and HTTPS configured, or create one by pressing the Create New Based On Seller Settings button.
  • For Key Pair Settings, select a key pair that you'd like to use to SSH into the instance later, or create a new one and use it.
  • Click Launch & go to EC2 Console.
  • Select the instance that was deployed, then select Actions > Monitor and troubleshoot > Get system log.

Note: If you don’t see anything in the system log wait 5–10 minutes and check again.

  • Refresh the console and wait a few minutes for the credentials to appear in the system log.
  • In order to get the pipeline working, we need to manually install Terraform on our Jenkins server. SSH in using the public IP address: ssh bitnami@<public ip>
  • Once you have successfully SSHed into your Jenkins instance, open the Terraform downloads page in your local browser, scroll down to Linux (this is a 64-bit Linux EC2 instance), and copy the download link.
  • In your SSH session, paste the link after the wget command and run the following commands:
$ wget https://releases.hashicorp.com/terraform/1.0.3/terraform_1.0.3_linux_amd64.zip
$ unzip terraform_1.0.3_linux_amd64.zip
$ sudo mv terraform /usr/bin
$ terraform -v
  • Sign in to Jenkins at the same public IP address of the EC2 instance, using the provided user name and password.

Part 5:- It's time to configure Terraform on Jenkins

  • Once you log in, you will see Manage Jenkins in the left-hand navigation; click on it.
  • Then go to the Manage Plugins section > the Available tab.
  • Search for Terraform and press Install without restart.
  • Make sure it installed successfully
  • Click Manage Jenkins & Click Global Tool Configuration from System configuration section.
  • Scroll down to the Terraform section and click Add Terraform.
  • Enter a Name of your choice. I’m going to use “terraform-labs” to make things simple.
  • Ensure Install automatically is unselected. It’s selected by default.
  • For Install directory enter /usr/bin. Click Save.

Manage AWS Credentials on Jenkins

  • Click Manage Jenkins > then Click Manage Credentials in the Security section.
  • Click Jenkins > Global credentials (unrestricted)> Add Credentials
  • In the Kind drop-down, select Secret text. For Secret, paste the AWS Access Key ID of your user, and for ID type "AWS_ACCESS_KEY_ID". Click OK.
  • Repeat the same steps for "AWS_SECRET_ACCESS_KEY", this time pasting the secret access key of the AWS account you are currently using.
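With the credential IDs named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the pipeline can expose them as environment variables, which the Terraform AWS provider picks up automatically. That means the provider block in the Terraform code can stay free of hard-coded keys, along the lines of this sketch (the region is an assumption):

# No access keys here: the AWS provider reads AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY from the environment that Jenkins provides.
provider "aws" {
  region = "us-east-1"
}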

Part 6:- Configure Jenkins Pipeline

  • Go back to Dashboard & Click on New Item.
  • Enter an item name, then select Pipeline and click OK at the bottom.
  • In the Pipeline section, from the Definition drop-down select Pipeline script from SCM.
  • Enter the following: SCM: Git| Repository URL: Your GitHub Repo with your Jenkinsfile
  • Branch: Your primary branch
  • Repository browser: Auto
  • Script Path: “Jenkinsfile”
  • Once done Click Save.

Part 7:- Run Jenkins Pipeline

  • Select Build with Parameters from the left navigation.
  • For the environment parameter, type the name you want to use for your workspace. The default is "terraform". Check autoApprove and leave destroy unchecked. Click Build.
  • Now you should see the steps of the pipeline begin; it takes a few minutes to complete each stage.
  • Review the plan & then click Proceed.
  • While the final step applies our infrastructure, you can click the Logs button.
  • It will display your infrastructure being created in real time. Below is a snippet from the Apply step logs.
  • Finally, you can see our Jenkins pipeline has completed!

Part 8:- Verify everything is working

  • Navigate to the AWS Console > EC2 Dashboard and grab the public IP of one of the instances created by our Auto Scaling group.

Open it in a new tab and verify the website is working. Refresh the page a few times to make sure the Application Load Balancer is able to switch between instances. Also terminate an instance to verify that the Auto Scaling group replaces it.

Part 9:- Destroying Our Infrastructure

  • Another benefit of using Terraform with Jenkins is that it can destroy everything with one click: run Build with Parameters again, but this time select the destroy parameter.
  • Make sure all the infrastructure is fully destroyed; you can also confirm this in the EC2 console.

You have reached the end of this project. Feel free to take your time to experiment with this lab and share!

Thank you for spending time on my post. Please clap, comment, and share if you like my content. I'm a DevOps enthusiast practicing my skills, learning, and trying to help and grow together, and I make more projects weekly! You can follow me here and on: https://www.linkedin.com/in/mynameisameed/


Sameed Uddin Mohammed

Terraform | 3x AWS | 2x Azure | 2x GCP | Certified Cloud/DevOps Engineer. Looking for better opportunities (remote). Join me: https://chat.whatsapp.com/EiCi7XYnCSD7BQnPz2mJIa