Infrastructure Automation with Terraform & AWS

Samkit Shah
5 min read · Jun 15, 2020

As DevOps engineers, we try to automate every process. With AWS as the primary cloud service provider, we can't keep logging in to the AWS console to provision servers and other services by hand. With the help of Terraform, we can automate the whole flow without ever opening the AWS GUI or CLI: from creating an instance and logging in to the system, to any kind of later modification.

What is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire data center.
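
In practice that means a short command loop; a minimal sketch of the workflow (run from the directory holding your .tf files):

terraform init       # download the provider plugins (here: AWS)
terraform plan       # preview what will be created or changed
terraform apply      # build the infrastructure
terraform destroy    # tear it all down when finished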

Created a Provider:

A provider is responsible for understanding API interactions and exposing resources. I'm using the IAM user "samkit", and the region I want to run in is mentioned below.

provider "aws" {
  region  = "ap-south-1"
  profile = "samkit"
}
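
The "samkit" profile must already exist in your local AWS credentials. If it doesn't yet, one way to create it (assuming the AWS CLI is installed and you have an access key for that IAM user):

aws configure --profile samkit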

Created a Security group:

A security group acts as a virtual firewall for your instance, controlling incoming and outgoing traffic. Here I allow the HTTP and SSH traffic my web server needs to serve the website.

resource "aws_security_group" "tsg" {
  name        = "myfirewall"
  description = "Allow inbound traffic"
  vpc_id      = "vpc-41f0ed29"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
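
A small hardening note: 0.0.0.0/0 on port 22 opens SSH to the entire internet. For anything beyond a quick demo, it is safer to restrict the SSH rule to your own IP, for example (203.0.113.25 is a placeholder address):

ingress {
  description = "SSH"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.25/32"]   # placeholder: substitute your own public IP
}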

Created an Instance:

An instance is a virtual server in the AWS cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. The output below prints my instance ID.

resource "aws_instance" "os1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "key1"
  security_groups = [aws_security_group.tsg.name]  # reference the group above so Terraform orders the creation

  tags = {
    Name = "OsFromTerraform"
  }
}

output "OsId" {
  value = aws_instance.os1.id
}
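
The key_name above assumes a key pair called "key1" already exists in ap-south-1. If you would rather have Terraform manage that too, a minimal sketch (assuming your public key sits at the path shown), after which the instance can reference aws_key_pair.key1.key_name:

resource "aws_key_pair" "key1" {
  key_name   = "key1"
  public_key = file("~/.ssh/id_rsa.pub")   # assumption: your local public key lives here
}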

Created a Volume (EBS):

Created a block storage volume and attached it to the instance. Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput- and transaction-intensive workloads at any scale.

resource "aws_ebs_volume" "ebsvol" {
  availability_zone = aws_instance.os1.availability_zone
  size              = 1

  tags = {
    Name = "myos1ebs"
  }
}

output "ebsId" {
  value = aws_ebs_volume.ebsvol.id
}

resource "aws_volume_attachment" "ebsat" {
  device_name = "/dev/sdd"   # shows up as /dev/xvdd inside the instance on this AMI
  volume_id   = aws_ebs_volume.ebsvol.id
  instance_id = aws_instance.os1.id
}

Created a snapshot of the EBS:

An EBS snapshot is a point-in-time copy of your Amazon EBS volume.

resource "aws_ebs_snapshot" "snapshot" {
  volume_id = aws_ebs_volume.ebsvol.id

  tags = {
    Name = "FromTerraSnap"
  }
}

Created an S3 bucket and an Origin Access Identity for CloudFront:

An S3 bucket is a public cloud storage resource available in Amazon Web Services' (AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are similar to file folders, store objects, which consist of data and its descriptive metadata.

resource "aws_s3_bucket" "b" {
  bucket = "samkit-t-bucket"
  acl    = "private"

  tags = {
    Name = "My bucket"
  }
}

output "s3out" {
  value = aws_s3_bucket.b
}
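
Since CloudFront will serve whatever sits in this bucket, you will want at least one object in it. One way to upload a file from Terraform itself, as a sketch (image.png is a hypothetical file next to the .tf files; aws_s3_bucket_object was the resource name in the provider versions current when this was written):

resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.b.id
  key    = "image.png"    # hypothetical object key
  source = "image.png"    # assumption: local file beside the .tf files
  acl    = "private"      # stays private; the OAI policy below grants CloudFront read access
}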

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Some comment"
}

output "origin_access_identity" {
  value = aws_cloudfront_origin_access_identity.origin_access_identity
}

data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.b.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.b.arn]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_bucket.b.id
  policy = data.aws_iam_policy_document.s3_policy.json
}

Created a CloudFront distribution:

resource "aws_cloudfront_distribution" "s3_distri" {
  origin {
    domain_name = aws_s3_bucket.b.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
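
To actually serve content through the distribution (for example, in image URLs inside the website), you need its domain name; an output makes that easy to grab ("cfDomain" is just a name I picked):

output "cfDomain" {
  value = aws_cloudfront_distribution.s3_distri.domain_name
}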

Remote login to the system and install the required software:

The null_resource resource implements the standard resource lifecycle but takes no further action on its own. The connection block tells Terraform how to log in to the system, and provisioners execute scripts or shell commands on a local or remote machine as part of resource creation or destruction.

resource "null_resource" "nullremote3" {

  # Wait until the EBS volume is attached before formatting and mounting it
  depends_on = [
    aws_volume_attachment.ebsat,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/91811/Downloads/key1.pem")
    host        = aws_instance.os1.public_ip
  }

  # Install and start the web server stack
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  # Format the attached volume (visible as /dev/xvdd), mount it as the
  # document root, and pull the website code into it
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdd",
      "sudo mount /dev/xvdd /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/samkit-jpss/MultiCloud.git /var/www/html/",
    ]
  }
}

output "myos_ip" {
  value = aws_instance.os1.public_ip
}

When you are done with your HCL code, save the file, run terraform init once in that directory to download the provider plugin, and then in the command prompt type and run:

terraform validate

After successful validation of your code, type and run:

terraform apply -auto-approve

The terraform apply command will apply the changes required to reach the desired state of the configuration.

[Screenshots: terraform apply running and completing successfully]

Here the output "myos_ip" prints the IP address of the instance, which we can hand to our clients so they can see the website running. Just for testing, I've made this HTML website:

website deployed
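
When you are finished testing, Terraform can remove everything it created just as easily, since the whole stack is tracked in its state file:

terraform destroy -auto-approve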

We can also integrate Terraform with Jenkins for the next level of automation, which I might cover in upcoming articles.

For the GitHub link, click here: GitHub
