Creating AWS resources using Boto3 | Terraform | CloudFormation | and both together

An IaC workflow with AWS

Let's talk about all three before we jump in:

Boto3 (AWS SDK for Python)

You use the AWS SDK for Python (Boto3) to create, configure, and manage AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK provides an object-oriented API as well as low-level access to AWS services.
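For example, a minimal sketch (assuming credentials have already been set up with aws configure) that lists S3 buckets using both the low-level client and the object-oriented resource interface could look like this:

import boto3

# Low-level client API: a thin wrapper over the S3 service API
s3_client = boto3.client("s3")
for bucket in s3_client.list_buckets()["Buckets"]:
    print("client:", bucket["Name"])

# Object-oriented resource API: works with Python objects instead of raw dicts
s3_resource = boto3.resource("s3")
for bucket in s3_resource.buckets.all():
    print("resource:", bucket.name)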

Terraform

Terraform is an open-source infrastructure as code tool that provides a consistent CLI workflow to manage hundreds of cloud vendor-specific services. Terraform codifies cloud APIs into declarative configuration files.

Tips:

  • It provisions infrastructure seamlessly
  • It updates infrastructure with ease
  • It destroys infrastructure without hassle
  • It checks code for errors with a single command
  • It plans out infrastructure prior to deployment
  • It tracks resources in a state file

CloudFormation

AWS CloudFormation gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code. A CloudFormation template describes your desired resources and their dependencies so you can launch and configure them together as a stack. You can use a template to create, update, and delete an entire stack as a single unit, as often as you need to, instead of managing resources individually. You can manage and provision stacks across multiple AWS accounts and AWS Regions.

Note: CloudFormation is AWS's proprietary IaC tool, in other words an AWS-native templating service, which means it can only be used on the AWS platform. So it may not be a good fit for you if AWS is not your cloud provider.

Terraform + CloudFormation

Finally, with AWS we can utilize CloudFormation, and with other cloud providers we can take advantage of their native tooling. Either way, Terraform can be seen as an IaC hub for all our cloud projects. With it, we can:

  • provision infrastructure seamlessly
  • update infrastructure with ease
  • destroy infrastructure without hassle
  • check code for errors with a single command
  • plan out infrastructure prior to deployment
  • track resources in a state file

We can do all of the above with Terraform while building up infrastructure alongside the native IaC tool of almost any cloud provider!

Reminder: this is part 1 of the project; in a later part I will dive into the AWS CDK with Terraform and attempt to provision a VPC using CDK for Terraform.

Here are the options I will touch upon throughout this project.

Prerequisites:

  • An AWS account with a non-root user (take security into consideration)
  • A local Windows 10 machine with Python 3 installed (or simply search for and install Python from the Windows Store)
  • Visual Studio Code installed (here)
  • The AWS CLI installed (link here)
  • Terraform installed (here)

Let's start working on them one by one.

Creating a non-root user

Per AWS best practice, the root user should not be used for everyday tasks, even administrative ones. Rather, the root user is used to create your first IAM users, groups, and roles. Then you should securely lock away the root user credentials and use them only for the few account and service management tasks that require them.

Note: if you would like to learn more about why we should not use the root user for day-to-day operations, and about AWS accounts in general, you can find more details here.

First, create a root user account (if you already have one, skip this and just log in). Then:
  • Create a user under IAM.
  • Choose programmatic access for that user before creation.
  • When it says Success, download the CSV file with the credentials (Access key ID and Secret access key) for later use.
  • Verify the AWS CLI installation by opening VS Code and pressing Ctrl + ~ to open a PowerShell terminal inside it.

Then use the following command to verify the installation:

$ aws --version
aws-cli/2.2.1 Python/3.8.8 Windows/10 exe/AMD64 prompt/off

To use the AWS CLI, we need to configure it with an AWS Access Key, Secret Access Key, default region, and output format:

$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:

Now it's time to install Terraform (alternative installation methods are linked here).

The Easy Way

A manual download-and-unzip install of Terraform works, but it would be tedious to repeat every time you had to install new software on your device. Let's use a package manager instead. There are a few package managers you can use to install Terraform on Windows; my favorite is Chocolatey. It makes installing, removing, and updating software as simple as a one-line command, and Terraform is no exception.

To install Terraform with Chocolatey, do the following steps:

  1. Open a CMD/PowerShell prompt as an administrator and install Chocolatey using the command from their install page.
  2. Once that is complete, run choco install terraform. If you like, you can also put -y on the end to auto-agree to installing it on your device.

After that command runs, you will get something like this:

Chocolatey v0.10.13
2 validations performed. 1 success(es), 1 warning(s), and 0 error(s).
Installing the following packages:
terraform
By installing you accept licenses for the packages.
Progress: Downloading terraform 0.12.6... 100%
terraform v0.12.6 [Approved]
Downloading terraform 64 bit
from 'https://releases.hashicorp.com/terraform/0.12.6/terraform_0.12.6_windows
Download of terraform_0.12.6_windows_amd64.zip (15.32 MB) completed.
--SNIP--

or

Alternatively, try the Windows Subsystem for Linux (WSL). This allows you to run Linux commands on Windows and, if you want, keep a separate test environment.

  1. Install WSL through the Microsoft Store: search for Ubuntu and install the latest release.

  2. In your WSL shell, run apt-get install unzip. You'll need this to extract the Terraform binary later.

  3. Download Terraform by running wget https://releases.hashicorp.com/terraform/0.12.6/terraform_0.12.6_linux_amd64.zip. Remember to replace the version and architecture with the ones that best fit your device. You can find the full list of Terraform releases here.

  4. Run unzip terraform_0.12.6_linux_amd64.zip terraform to extract the Terraform binary from the archive.

  5. Once the ZIP file is uncompressed, you'll need to move the binary somewhere on the system path. Fortunately, Linux has a folder where users can add binaries by default. Move the Terraform binary there by running mv terraform /usr/local/bin/. The /usr/local/bin folder is already on your system path.

  6. Verify the installation was successful by running:

$ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/aws v3.21.0

Note: if you are using WSL, also use the following commands to generate a private key and confirm it exists at the expected path:

$ ssh-keygen
$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gt
.
.
.
-----END OPENSSH PRIVATE KEY-----

Installing Python3 and Boto3

You may already have Python 2 preinstalled on your system; however, it is preferable to install the latest version of Python 3 for this project.

For detailed installation instructions, please see here.

Make sure you verify the installation after installing:

$ pip install boto3
$ python3 --version
Python 3.9.4
$ pip3 --version
pip 20.2.3 from C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.1264.0_x64__qbz5n2kfra8p0\lib\site-packages\pip (python 3.9)
$ pip3 show boto3
Name: boto3
Version: 1.17.62
Summary: The AWS SDK for Python
Home-page: https://github.com/boto/boto3
Author: Amazon Web Services
Author-email: None
License: Apache License 2.0
Location: c:\users\msun_\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages
Requires: s3transfer, jmespath, botocore
Required-by:

Boto3 to provision our VPC

It's time to create a Python file and execute our first Boto3 script to deploy the resources in AWS.

First, create a folder and switch into it:

$ mkdir Boto3forVPC && cd Boto3forVPC/

$ vim vpc.py
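The original vpc.py script is not embedded here, so the sketch below is a simplified reconstruction based on the output shown further down (VPC tagged boto3_vpc, internet gateway, public route table, subnet, security group, key pair, and a bastion EC2 instance). The AMI ID, CIDR ranges, security group name, and the placeholder IP are assumptions; adjust them before running.

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# 1. VPC with a Name tag
vpc = ec2.create_vpc(CidrBlock="172.16.0.0/16")
vpc.wait_until_available()
print(vpc)
print(vpc.create_tags(Tags=[{"Key": "Name", "Value": "boto3_vpc"}]))

# 2. Internet gateway attached to the VPC
igw = ec2.create_internet_gateway()
vpc.attach_internet_gateway(InternetGatewayId=igw.id)
print(igw)

# 3. Route table with a default route to the internet
route_table = vpc.create_route_table()
print(route_table)
route = route_table.create_route(DestinationCidrBlock="0.0.0.0/0", GatewayId=igw.id)
print(route)

# 4. Public subnet associated with the route table
subnet = ec2.create_subnet(CidrBlock="172.16.1.0/24", VpcId=vpc.id)
route_table.associate_with_subnet(SubnetId=subnet.id)
print(subnet)

# 5. Security group allowing SSH from your own IP only (placeholder below)
sg = ec2.create_security_group(GroupName="bastion_sg",
                               Description="allow SSH only", VpcId=vpc.id)
sg.authorize_ingress(IpProtocol="tcp", FromPort=22, ToPort=22,
                     CidrIp="203.0.113.10/32")  # replace with <your-ip>/32
print(sg)

# 6. Key pair saved locally, then a bastion host instance
key_pair = ec2.create_key_pair(KeyName="ec2-keypair")
with open("ec2-keypair.pem", "w") as f:
    f.write(key_pair.key_material)

instances = ec2.create_instances(
    ImageId="ami-0742b4e673072066f",  # assumed Amazon Linux 2 AMI in us-east-1
    InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    KeyName="ec2-keypair",
    NetworkInterfaces=[{"SubnetId": subnet.id, "DeviceIndex": 0,
                        "AssociatePublicIpAddress": True,
                        "Groups": [sg.group_id]}],
)
print(instances)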

Then run the script to deploy the desired resources on AWS:

$ python3 vpc.py
ec2.Vpc(id='vpc-005382a4c773378a7')
[ec2.Tag(resource_id='vpc-005382a4c773378a7', key='Name', value='boto3_vpc')]
ec2.InternetGateway(id='igw-0e1ca1c17c56dea96')
ec2.RouteTable(id='rtb-0b0b2afb80ba70677')
ec2.Route(route_table_id='rtb-0b0b2afb80ba70677', destination_cidr_block='0.0.0.0/0')
ec2.Subnet(id='subnet-0b3fcb0b24adadd8b')
ec2.SecurityGroup(id='sg-03090bf8dcd216d50')
[ec2.Instance(id='i-0db6a00626adcd7d7')]

After a few minutes, cross-check in the AWS console:

The bastion host EC2 instance, the VPC named boto3_vpc, the internet gateway attached to it, the subnet associated with it, and the route table (with a public route) are all created.

A security group is also created with an SSH inbound rule scoped to your own IP address. (Note: for your own security, you should never expose your IP address.)

The script also calls the Boto3 EC2 key-pair function to create a key pair named ec2-keypair, then captures the private key and stores it locally in a file named ec2-keypair.pem (see the key-pair step in the sketch above).

$ ls 
ec2-keypair.pem vpc.py
It's time to destroy the resources we have just created. Create the cleanup script with vim vpc_destroy.py and copy in the code (a simplified sketch follows below).
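The full vpc_destroy.py is a couple of hundred lines long (see the post referenced below); the version here is a heavily simplified sketch with the same command-line interface (--vpc_id, --region, --services), so treat it as an illustration rather than the original script.

import argparse

import boto3

parser = argparse.ArgumentParser(description="Tear down a VPC created by vpc.py")
parser.add_argument("--vpc_id", required=True)
parser.add_argument("--region", required=True)
parser.add_argument("--services", required=True, help="comma-separated, e.g. ec2")
args = parser.parse_args()

ec2 = boto3.resource("ec2", region_name=args.region)
vpc = ec2.Vpc(args.vpc_id)

# Terminate EC2 instances first so their network interfaces are released
if "ec2" in args.services.split(","):
    instances = list(vpc.instances.all())
    print("instance deletion list:", [i.id for i in instances])
    for instance in instances:
        instance.terminate()
    print("Waiting for instances to terminate")
    for instance in instances:
        instance.wait_until_terminated()

# Detach and delete internet gateways
for igw in vpc.internet_gateways.all():
    vpc.detach_internet_gateway(InternetGatewayId=igw.id)
    igw.delete()

# Remove explicit route table associations and non-main route tables
for rt in vpc.route_tables.all():
    main = False
    for assoc in rt.associations:
        if assoc.main:
            main = True
        else:
            assoc.delete()
    if not main:
        rt.delete()

# Delete subnets and non-default security groups, then the VPC itself
for subnet in vpc.subnets.all():
    subnet.delete()
for sg in vpc.security_groups.all():
    if sg.group_name != "default":
        sg.delete()
vpc.delete()
print(f"destroyed {args.vpc_id} in {args.region}")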

Use the command below to destroy the resources we created earlier, substituting your own vpc_id:

$ python3 vpc_destroy.py --vpc_id vpc-005382a4c773378a7 --region us-east-1 --services ec2
type: <class 'str'>
[vpc_destroy.py:243 - <module>() ] calling destroy_services with ec2
[credentials.py:1217 - load() ] Found credentials in shared credentials file: ~/.aws/credentials
[vpc_destroy.py:58 - destroy_ec2() ] instance deletion list: ['i-0db6a00626adcd7d7']
[vpc_destroy.py:60 - destroy_ec2() ] Waiting for instances to terminate
[vpc_destroy.py:246 - <module>() ] calling delete_vpc with vpc-005382a4c773378a7
[vpc_destroy.py:162 - delete_vpc() ] no ENIs remaining
destroyed vpc-005382a4c773378a7 in us-east-1

Note: as shown above, you must provide the vpc_id, region, and services. If you want to learn more about how this VPC cleanup with Boto3 works, please refer to this post.

Note: every time you run vpc.py to create a brand-new VPC, you need to provide a brand-new key pair name (or delete the old key pair first); otherwise, creation will fail. To delete the old key pair, you can use either the AWS console or the AWS CLI.

For the AWS CLI, use the command below:

$ aws ec2 delete-key-pair --key-name <name of your key pair>

Next, I would also like to cover Terraforming the VPC, and then Terraforming the VPC using AWS CloudFormation.

The goal is to show various ways to provision a VPC to meet different needs and requirements, and to figure out which option is more efficient and effective.

Here we go with Terraforming the VPC.

First, we need to create a vpc.tf file with vi, either in the same folder or in a separate one; a condensed sketch is shown below.
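The original vpc.tf isn't embedded here, so below is a condensed sketch reconstructed from the terraform plan output shown further down. The resource names (aws_vpc.main, aws_internet_gateway.igw, aws_subnet.public_subnet, and so on) come from that output; variable names such as cidr_vpc, cidr_subnet, availability_zone, and my_ip are assumptions.

provider "aws" {
  region = var.region
}

resource "aws_vpc" "main" {
  cidr_block           = var.cidr_vpc
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags                 = { Name = "boto3_vpc", Environment = var.environment_tag }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "boto3_vpc", Environment = var.environment_tag }
}

resource "aws_subnet" "public_subnet" {
  count             = 1
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.cidr_subnet
  availability_zone = var.availability_zone
  tags              = { Environment = var.environment_tag }
}

resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
  tags = { Environment = var.environment_tag }
}

resource "aws_route_table_association" "public_rt_association" {
  count          = 1
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

resource "aws_security_group" "bastion_host" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.my_ip}/32"]
  }
  egress {
    from_port = 22
    to_port   = 22
    protocol  = var.protocol
  }
}

resource "tls_private_key" "public_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "bastion_host_key" {
  key_name   = "ec2-keypair"
  public_key = tls_private_key.public_key.public_key_openssh
}

resource "aws_instance" "bastion_host" {
  ami                         = "ami-0742b4e673072066f"
  instance_type               = "t2.micro"
  key_name                    = aws_key_pair.bastion_host_key.key_name
  subnet_id                   = aws_subnet.public_subnet[0].id
  vpc_security_group_ids      = [aws_security_group.bastion_host.id]
  associate_public_ip_address = true
}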

Second, you will also need to create a variables.tf file to declare the variables the configuration uses.
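Again, the original file isn't included; a sketch of variables.tf matching the variables assumed in the vpc.tf sketch above might look like this:

variable "region" {
  description = "AWS region to deploy into"
  type        = string
}

variable "cidr_vpc" {
  description = "CIDR block for the VPC"
  type        = string
}

variable "cidr_subnet" {
  description = "CIDR block for the public subnet"
  type        = string
}

variable "availability_zone" {
  description = "Availability zone for the public subnet"
  type        = string
}

variable "environment_tag" {
  description = "Environment tag applied to resources"
  type        = string
}

variable "protocol" {
  description = "Protocol used in the security group rules"
  type        = string
}

variable "my_ip" {
  description = "Your public IP address, allowed to SSH to the bastion host"
  type        = string
}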

Third, you will also need a terraform.tfvars file to supply the required variable values.
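A matching terraform.tfvars sketch, with values taken from the plan output where possible (the my_ip value is a placeholder to replace with your own address):

region            = "us-east-1"
cidr_vpc          = "172.16.0.0/16"
cidr_subnet       = "172.16.1.0/24"
availability_zone = "us-east-1a"
environment_tag   = "Development"
protocol          = "tcp"
my_ip             = "203.0.113.10"  # replace with your own public IP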

Finally, you also need an outputs.tf file so that useful values are printed to the console after creation.
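A possible outputs.tf sketch, based on the "Changes to Outputs" section of the plan output below:

output "cidr_block" {
  value = aws_vpc.main.cidr_block
}

output "gateway_id" {
  value = aws_internet_gateway.igw.id
}

output "key_name" {
  value = aws_key_pair.bastion_host_key.key_name
}

output "route_table_id" {
  value = aws_route_table.public_route_table.id
}

output "subnet_id" {
  value = aws_subnet.public_subnet[0].id
}

output "tags" {
  value = aws_vpc.main.tags
}

output "vpc_security_group_ids" {
  value = [aws_security_group.bastion_host.id]
}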

Now it's time to terraform our VPC:

$ terraform init

This initializes the working directory and installs the provider plugins Terraform needs to run.

Make sure to validate the syntax of all the Terraform files in your directory:

$ terraform validate
Success! The configuration is valid.

Then we will plan our Terraform infrastructure to confirm which resources will be deployed by this code.

$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

# aws_instance.bastion_host will be created
+ resource “aws_instance” “bastion_host” {
+ ami = “ami-0742b4e673072066f”
+ arn = (known after apply)
+ associate_public_ip_address = true
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = “t2.micro”
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = “ec2-keypair”
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = (known after apply)
+ tags_all = (known after apply)
+ tenancy = (known after apply)
+ vpc_security_group_ids = (known after apply)
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ root_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
}
# aws_internet_gateway.igw will be created
+ resource “aws_internet_gateway” “igw” {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ “Environment” = “Development”
+ “Name” = “boto3_vpc”
}
+ tags_all = {
+ “Environment” = “Development”
+ “Name” = “boto3_vpc”
}
+ vpc_id = (known after apply)
}
# aws_key_pair.bastion_host_key will be created
+ resource “aws_key_pair” “bastion_host_key” {
+ arn = (known after apply)
+ fingerprint = (known after apply)
+ id = (known after apply)
+ key_name = “ec2-keypair”
+ key_pair_id = (known after apply)
+ public_key = (known after apply)
+ tags_all = (known after apply)
}
# aws_route_table.public_route_table will be created
+ resource “aws_route_table” “public_route_table” {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ carrier_gateway_id = “”
+ cidr_block = “0.0.0.0/0”
+ destination_prefix_list_id = “”
+ egress_only_gateway_id = “”
+ gateway_id = (known after apply)
+ instance_id = “”
+ ipv6_cidr_block = “”
+ local_gateway_id = “”
+ nat_gateway_id = “”
+ network_interface_id = “”
+ transit_gateway_id = “”
+ vpc_endpoint_id = “”
+ vpc_peering_connection_id = “”
},
]
+ tags = {
+ “Environment” = “var.environment_tag”
}
+ tags_all = {
+ “Environment” = “var.environment_tag”
}
+ vpc_id = (known after apply)
}
# aws_route_table_association.public_rt_association[0] will be created
+ resource “aws_route_table_association” “public_rt_association” {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_security_group.bastion_host will be created
+ resource “aws_security_group” “bastion_host” {
+ arn = (known after apply)
+ description = “Managed by Terraform”
+ egress = [
+ {
+ cidr_blocks = []
+ description = “”
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = “var.protocol”
+ security_groups = []
+ self = false
+ to_port = 22
},
]
+ id = (known after apply)
+ ingress = [
+ {
+ cidr_blocks = [
+ “72.137.76.221/32”,
]
+ description = “”
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = “tcp”
+ security_groups = []
+ self = false
+ to_port = 22
},
]
+ name = (known after apply)
+ name_prefix = (known after apply)
+ owner_id = (known after apply)
+ revoke_rules_on_delete = false
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
}
# aws_subnet.public_subnet[0] will be created
+ resource “aws_subnet” “public_subnet” {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = “us-east-1a”
+ availability_zone_id = (known after apply)
+ cidr_block = “172.16.1.0/24”
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ tags = {
+ “Environment” = “var.environment_tag”
}
+ tags_all = {
+ “Environment” = “var.environment_tag”
}
+ vpc_id = (known after apply)
}
# aws_vpc.main will be created
+ resource “aws_vpc” “main” {
+ arn = (known after apply)
+ assign_generated_ipv6_cidr_block = false
+ cidr_block = “172.16.0.0/16”
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = true
+ enable_dns_support = true
+ id = (known after apply)
+ instance_tenancy = “default”
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ “Environment” = “Development”
+ “Name” = “boto3_vpc”
}
+ tags_all = {
+ “Environment” = “Development”
+ “Name” = “boto3_vpc”
}
}
# tls_private_key.public_key will be created
+ resource “tls_private_key” “public_key” {
+ algorithm = “RSA”
+ ecdsa_curve = “P224”
+ id = (known after apply)
+ private_key_pem = (sensitive value)
+ public_key_fingerprint_md5 = (known after apply)
+ public_key_openssh = (known after apply)
+ public_key_pem = (known after apply)
+ rsa_bits = 4096
}
Plan: 9 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ cidr_block = “172.16.0.0/16”
+ gateway_id = (known after apply)
+ key_name = “ec2-keypair”
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ Environment = “Development”
}
+ vpc_security_group_ids = [
+ (known after apply),
]

Once you are ready, let's provision the infrastructure using the command below:

$ terraform apply --auto-approve
creation complete

Then let's double-check the resources that have been created in the AWS console.

VPC named boto3_vpc created

Subnet created and associated with the route table

It's time to clean up the infrastructure we created using Terraform:

$ terraform destroy
.
.
Enter a value: yes
terraform destroy vpc.tf

Now we will provision the VPC using CloudFormation.

Here is the official AWS VPC template for reference.

Note: first create a key pair, since CloudFormation cannot create one for you, and you will need it to access the EC2 instance.

$ aws ec2 create-key-pair --key-name ec2-keypair
{
"KeyFingerprint": "ab:b2:0c:81:9e:1e:07:d5:15:bb:a9:b1:41:b0:5d:e2:af:b5:38:23",
"KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAmB0Ani+OiUMiqrlgrMc50oGo9KjsVj3MNnxbmFSyMNxuLrSp\nXLiRsSHcJK7cH6IxOXuSOE0fczJmhFQZnFLTpkOqx7ELpRBDSTFdnYfqXf5oxrjs\nf5OZRXuZqjncgxxd7f/KLEBrn5xBR03aiUkc/w5lYTR0JMjWIlIsPxIxRwF9NZ/o\n2AA6K/gX2lqteFV9mUizq0Z301Ryd/RNyeABEHhJ4fuJTb1MlBc1wEGZj2pM0aVr\n67nRV0RZhTP8uj9MKg1OAvb5yaZuvBiccoiUOshoYjgrPJsnuywH6kNLHNahbrjj\n9XirKQdQ4TogqJnaW/Vd3d9J8IwNA8LcYs88owIDAQABAoIBACxeaUu6u2y2NGpv\n4A8FnYwVXd7fVvBg3iwWYfEw4zj1Uv40nCH7hCOSqM/aYUKo4IrPzHq3pDDJxrVa\ngo3iavHYUvwkXC0tbTLwP0ov1uDL0GwGjJU5zD9EKjJI5lUn9Q3yylnWAI5x2Wif\nANuCg/6xiEiuMCJ6olsodNeAyvbWuIhR+uyFfVE4vEdoqK4l8UYWQNACOnppKgaJ\nrbcKnfi+dj0AejQInpk8Ov6UUdTEINQZ4JG/d/2DFYWktGBmHVhEm9h4WIr9HRV6\nQT6LklIZYF6e9W379Z5ZXX1aGUdSeHPAefFkNwPrNCMjLCRur+C16dUAzO6jBRpG\nYZPcauECgYEA3F/9ioQ0x4//wAQkzZb81tFqRRjKTSggx2+lTKkb/UhXtCN67XJD\nMzJoGOIVPVbHdcmpP0v0cG7g71e0gyRZ7zN0m28n6L415UXOx2/PL3qZoRb3TcAq\njm4uaTxp0pCXLRmfryqiCcZzu6cmxG6YujmtKNdBrGjuvIqNdZCutu8CgYEAsLQR\nNcvxBxAGFwSWZv5ztf5agmpgK8j9gVxcO3Qc0QIU67o78hgvVkNhYgopWt4UmKzi\nPRMMVRwX82iNpLFXwqgKQnIcueYiYXdC8F9+FSByq1TnkTFtLQJwJlRln0/TreNO\n6XN+gZBmldacExyqPAz93+vZo4lvu1LcbXybNY0CgYAkPf0afKeZcksjLwtGbGBk\ni8goWO1cRw8s/WV3+A/MVctmqrcaucHnd5C7FuNbVRw0eNfGux0WKIYBlrDvKFlK\nB3JT5bHwiueeLx7UmcS/EDCX14kQVlwpVGF5mR/mKzVRi3dBfYdsiCCcad7sSyv+\n5GFf6Ba63f71LuwYu5SgLQKBgQCTxzQxcorz5iHBtFN4dUseJEdblE0zsRbZzf1Q\nt421+nC2p/ykPjewhA94Z5koZlyBRuy6OSjyMNmS9pim6K3FnLVf1oFRszaDnrL7\nxlDyqD1eLlavpc9xef2DAMgwURlt7pE7Shy9jJ9OprnGfg2cxRy43U0ZqMIpvmWc\npz5CrQKBgCttNZrapWuh8Y6QXlMDp5sCovUAj9Fw5J11j0yZyf+i704hnEuv+/5H\nvzFuvTwaRlSIoz1Frnvhb9NF+9NYqrno2T3GOATRgThgHKN9soqJism5tG/jLGDS\n2x9P99HC94Ybd13IKDXrLK/gCVh97Zlel21CY2+Pu6K/6L3kjogH\n-----END RSA PRIVATE KEY-----",
"KeyName": "ec2-keypair",
"KeyPairId": "key-023bd02d2f97b4a31"
}
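Note: if you would rather have the private key written straight into a .pem file instead of copying the escaped KeyMaterial out of the JSON above, a variation like the following works in a Linux/WSL shell (the file name is just a convention):

$ aws ec2 create-key-pair --key-name ec2-keypair \
    --query 'KeyMaterial' --output text > ec2-keypair.pem
$ chmod 400 ec2-keypair.pem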

After creating the key pair, it's time to create the stack using the AWS CLI and the CloudFormation template:

Using the following command, you can deploy similar resources with CloudFormation:

$ aws cloudformation create-stack --stack-name cfvpc --template-body file://vpc.yaml
StackId: arn:aws:cloudformation:us-east-1:050411548990:stack/cfvpc/56c90c50-ab72-11eb-b620-0adbfd4180bb

Then we will cross-check our resources in the AWS console under CloudFormation, in the Resources tab:

You can verify all the resources here and click on each one to jump to the location where it was created. This is a benefit of CloudFormation: you can check, edit, and destroy resources from within the AWS console.

Note: with CloudFormation, we are also able to view the stack Outputs for future reference.

Parameters

Note: the Parameters tab records the parameter values we supplied initially.

Template

Note: the Template tab stores a copy of the original CloudFormation template for reference.

Now it's time for us to clean up and delete the infrastructure:

$ aws cloudformation delete-stack --stack-name cfvpc

It is all cleaned up now

Finally, let's take advantage of Terraform and CloudFormation together, using Terraform to drive a CloudFormation stack that provisions the VPC.

For this part, we will create a brand-new directory and change into it:

$ mkdir tf_cf_vpc && cd tf_cf_vpc/

For the CloudFormation side, we will still be using the vpc.yaml file created previously.

We also need to create our tf_cf_vpc.tf file; a sketch is shown below.
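The original tf_cf_vpc.tf isn't embedded here either; the sketch below is reconstructed from the plan output that follows. The stack name, parameter names, and key name come from that output, while the provider block and the file() call are assumptions.

provider "aws" {
  region = "us-east-1"
}

resource "tls_private_key" "public_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "bastion_host_key" {
  key_name   = "tf_cf_keypair"
  public_key = tls_private_key.public_key.public_key_openssh
}

resource "aws_cloudformation_stack" "tf_cf_vpc" {
  name = "TfCfVpc"

  parameters = {
    InstanceType = "t3.nano"
    KeyName      = aws_key_pair.bastion_host_key.key_name
  }

  # Reuse the CloudFormation template from the previous section
  template_body = file("${path.module}/vpc.yaml")
}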

Finally, we are ready to terraform our infrastructure using both IaC tools:

$ terraform init

Then validate our code

$ terraform validate
Success! The configuration is valid.

After that, let's plan our infrastructure and confirm the desired resources:

$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

# aws_cloudformation_stack.tf_cf_vpc will be created
+ resource “aws_cloudformation_stack” “tf_cf_vpc” {
+ id = (known after apply)
+ name = “TfCfVpc”
+ outputs = (known after apply)
+ parameters = {
+ “InstanceType” = “t3.nano”
+ “KeyName” = “tf_cf_keypair”
}
+ policy_body = (known after apply)
+ tags_all = (known after apply)
+ template_body = <<-EOT
Description: This CloudFormation YAML file will provision a VPC
Parameters:
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
Default: ec2-keypair
InstanceType:
Description: EC2 instance type
Type: String
Default: t2.micro
EnvironmentName:
Description: An environment name that is prefixed to resource names
Type: String
Default: Development
VpcCIDR:
Description: Please enter the IP range (CIDR notation) for this VPC
Type: String
Default: “172.16.0.0/16”
PublicSubnet1CIDR:
Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
Type: String
Default: “172.16.1.0/24”
Mappings:
AWSRegionToAMI:
us-east-1:
AMIID: ami-0742b4e673072066f
Resources:
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock:
Ref: VpcCIDR
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
— Key: Name
Value:
Ref: EnvironmentName
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
— Key: Name
Value:
Ref: EnvironmentName
InternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId:
Ref: InternetGateway
VpcId:
Ref: VPC
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId:
Ref: VPC
AvailabilityZone: !Select [ 0, !GetAZs ‘’ ]
CidrBlock:
Ref: PublicSubnet1CIDR
MapPublicIpOnLaunch: true
Tags:
— Key: Name
Value: !Sub ${EnvironmentName} Public Subnet (AZ1)
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId:
Ref: VPC
Tags:
— Key: Name
Value: !Sub ${EnvironmentName} Public Routes
DefaultPublicRoute:
Type: AWS::EC2::Route
DependsOn: InternetGatewayAttachment
Properties:
RouteTableId:
Ref: PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId:
Ref: InternetGateway
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId:
Ref: PublicRouteTable
SubnetId:
Ref: PublicSubnet1
VPCEC2SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: only allow SSH traffic
GroupName: SSH-ONLY
SecurityGroupIngress:
— CidrIp: 72.137.76.221/32
FromPort: 22
IpProtocol: tcp
ToPort: 22
Tags:
-
Key: Name
Value: CloudFormationSecurityGroup
VpcId:
Ref: VPC
VPCEC2:
Type: AWS::EC2::Instance
Properties:
ImageId:
!FindInMap
— AWSRegionToAMI
— !Ref AWS::Region
— AMIID
InstanceType: !Ref InstanceType
SecurityGroupIds:
— !GetAtt “VPCEC2SecurityGroup.GroupId”
SubnetId: !Ref PublicSubnet1
KeyName:
Ref: KeyName
Outputs:
VPC:
Description: A reference to the created VPC
Value:
Ref: VPC
PublicSubnet1:
Description: A reference to the public subnet in the 1st Availability Zone
Value:
Ref: PublicSubnet1
VPCEC2SecurityGroup:
Description: Security group with no ingress rule
Value:
Ref: VPCEC2SecurityGroup
InternetGateway:
Description: InternetGateway Information
Value:
Ref: InternetGateway
PublicRouteTable:
Description: Public Route Table Information
Value:
Ref: PublicRouteTable
VPCEC2:
Description: EC2 Information
Value:
Ref: VPCEC2
EOT
}
# aws_key_pair.bastion_host_key will be created
+ resource “aws_key_pair” “bastion_host_key” {
+ arn = (known after apply)
+ fingerprint = (known after apply)
+ id = (known after apply)
+ key_name = “tf_cf_keypair”
+ key_pair_id = (known after apply)
+ public_key = (known after apply)
+ tags_all = (known after apply)
}
# tls_private_key.public_key will be created
+ resource “tls_private_key” “public_key” {
+ algorithm = “RSA”
+ ecdsa_curve = “P224”
+ id = (known after apply)
+ private_key_pem = (sensitive value)
+ public_key_fingerprint_md5 = (known after apply)
+ public_key_openssh = (known after apply)
+ public_key_pem = (known after apply)
+ rsa_bits = 4096
}
Plan: 3 to add, 0 to change, 0 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

Then run the following to deploy the resources in AWS:

$ terraform apply --auto-approve

Then we will verify and double-check the resources created in the AWS console:

  • VPC named Development created
  • Internet gateway created and attached to the Development VPC
  • Subnet created and associated with the route table
  • Route table created with a public route
  • Security group created with an SSH inbound rule restricted to your own IP address (note: for your own security, you should never expose your IP address)
  • EC2 instance created

Note: here we will update one value, InstanceType, in our .tf file to test the power of Terraform. Combined with CloudFormation, we get both Terraform's flexibility and CloudFormation's native, easy AWS provisioning.

Here we will update the tf_cf_vpc.tf file and change the InstanceType parameter to t3.large.

Let us plan it again to see the difference

$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

# aws_cloudformation_stack.tf_cf_vpc will be created
+ resource “aws_cloudformation_stack” “tf_cf_vpc” {
+ id = (known after apply)
+ name = “TfCfVpc”
+ outputs = (known after apply)
+ parameters = {
+ “InstanceType” = “t3.large”
+ “KeyName” = “tf_cf_keypair”
}
+ policy_body = (known after apply)
+ tags_all = (known after apply)
+ template_body = <<-EOT
Description: This CloudFormation YAML file will provision a VPC
Parameters:
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
Default: ec2-keypair
InstanceType:
Description: EC2 instance type
Type: String
Default: t2.micro
EnvironmentName:
Description: An environment name that is prefixed to resource names
Type: String
Default: Development
VpcCIDR:
Description: Please enter the IP range (CIDR notation) for this VPC
Type: String
Default: “172.16.0.0/16”
PublicSubnet1CIDR:
Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
Type: String
Default: “172.16.1.0/24”
Mappings:
AWSRegionToAMI:
us-east-1:
AMIID: ami-0742b4e673072066f
Resources:
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock:
Ref: VpcCIDR
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
— Key: Name
Value:
Ref: EnvironmentName
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
— Key: Name
Value:
Ref: EnvironmentName
InternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId:
Ref: InternetGateway
VpcId:
Ref: VPC
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId:
Ref: VPC
AvailabilityZone: !Select [ 0, !GetAZs ‘’ ]
CidrBlock:
Ref: PublicSubnet1CIDR
MapPublicIpOnLaunch: true
Tags:
— Key: Name
Value: !Sub ${EnvironmentName} Public Subnet (AZ1)
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId:
Ref: VPC
Tags:
— Key: Name
Value: !Sub ${EnvironmentName} Public Routes
DefaultPublicRoute:
Type: AWS::EC2::Route
DependsOn: InternetGatewayAttachment
Properties:
RouteTableId:
Ref: PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId:
Ref: InternetGateway
PublicSubnet1RouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId:
Ref: PublicRouteTable
SubnetId:
Ref: PublicSubnet1
VPCEC2SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: only allow SSH traffic
GroupName: SSH-ONLY
SecurityGroupIngress:
— CidrIp: 72.137.76.221/32
FromPort: 22
IpProtocol: tcp
ToPort: 22
Tags:
-
Key: Name
Value: CloudFormationSecurityGroup
VpcId:
Ref: VPC
VPCEC2:
Type: AWS::EC2::Instance
Properties:
ImageId:
!FindInMap
— AWSRegionToAMI
— !Ref AWS::Region
— AMIID
InstanceType: !Ref InstanceType
SecurityGroupIds:
— !GetAtt “VPCEC2SecurityGroup.GroupId”
SubnetId: !Ref PublicSubnet1
KeyName:
Ref: KeyName
Outputs:
VPC:
Description: A reference to the created VPC
Value:
Ref: VPC
PublicSubnet1:
Description: A reference to the public subnet in the 1st Availability Zone
Value:
Ref: PublicSubnet1
VPCEC2SecurityGroup:
Description: Security group with no ingress rule
Value:
Ref: VPCEC2SecurityGroup
InternetGateway:
Description: InternetGateway Information
Value:
Ref: InternetGateway
PublicRouteTable:
Description: Public Route Table Information
Value:
Ref: PublicRouteTable
VPCEC2:
Description: EC2 Information
Value:
Ref: VPCEC2
EOT
}
# aws_key_pair.bastion_host_key will be created
+ resource “aws_key_pair” “bastion_host_key” {
+ arn = (known after apply)
+ fingerprint = (known after apply)
+ id = (known after apply)
+ key_name = “tf_cf_keypair”
+ key_pair_id = (known after apply)
+ public_key = (known after apply)
+ tags_all = (known after apply)
}
# tls_private_key.public_key will be created
+ resource “tls_private_key” “public_key” {
+ algorithm = “RSA”
+ ecdsa_curve = “P224”
+ id = (known after apply)
+ private_key_pem = (sensitive value)
+ public_key_fingerprint_md5 = (known after apply)
+ public_key_openssh = (known after apply)
+ public_key_pem = (known after apply)
+ rsa_bits = 4096
}
Plan: 3 to add, 0 to change, 0 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

It's time to re-apply to roll out the InstanceType change, which takes less time than a full rebuild:

$ terraform apply
.
.
.
Enter a value: yes

aws_cloudformation_stack.tf_cf_vpc: Modifying... [id=arn:aws:cloudformation:us-east-1:464392538707:stack/TfCfVpc/b8b5f360-a4b5-11eb-b4b0-12d101416601]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 10s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 20s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 30s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 40s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 50s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 1m0s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 1m10s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 1m20s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Still modifying... [id=arn:aws:cloudformation:us-east-1:464392...c/b8b5f360-a4b5-11eb-b4b0-12d101416601, 1m30s elapsed]
aws_cloudformation_stack.tf_cf_vpc: Modifications complete after 1m39s [id=arn:aws:cloudformation:us-east-1:464392538707:stack/TfCfVpc/b8b5f360-a4b5-11eb-b4b0-12d101416601]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Let's double-check in the AWS console under EC2 instances:

Now let's destroy the whole infrastructure one last time:

$ terraform destroy --auto-approve
destroyed tf_cf_vpc.tf

Conclusion:

Let us recap our project’s objective and what we have accomplished

Project Architecture

As shown in the project architecture, we dove into four different ways to provision a VPC using infrastructure-as-code tools:

  • Boto3
  • CloudFormation (AWS native)
  • Terraform (Cloud native)
  • Terraform + CloudFormation

First of all, Boto3 required two different files: one to create the VPC in the first place and one to destroy it at the end of the day. That said, this could be a concern in terms of management.

Secondly, we built up our VPC infrastructure using CloudFormation. Since it is AWS native, it worked perfectly. However, for updates we may need to keep editing the same template file, particularly for parameters; even so, it can still be well managed.

Thirdly, Terraform as an IaC tool shone with its flexibility. From building infrastructure, through updating it, to destroying it at the end of the day, every single move was seamless. However, we need to spend time learning to describe infrastructure in Terraform's own language, which can be time-consuming.

Finally, my preferred way: Terraform + CloudFormation for AWS. While taking advantage of every aspect of Terraform, we build our AWS infrastructure with CloudFormation templates, which are readily available on the official AWS website and in plenty of cloud-related blogs and websites. We can simply apply the CloudFormation .yaml file through Terraform.

More importantly, Terraform's flexibility still applies. For instance, in this project we updated our InstanceType using a parameter in the .tf file. For AWS, this approach is the best of both worlds (or maybe three).

Terraform + Cloud Native IaC Tool for your Cloud provider should be the way to go in Cloud!

I will be making more projects, specifically on migrations, using Terraform, GCP, AWS, Ansible, Python, and more.

I'm making these for my own reference, inspired by my mentors.

Please do clap, share, and give me your feedback; if you like, subscribe as well.

Feel free to connect with me on LinkedIn if you are facing any issues with this.

Thank you for reading!! Cheers!

Terraform: Associate | AWS:SSA| RHCSA| VCP:DCV|ITIL Certified
