Creating a Complete Infrastructure to Launch a Website Using Terraform and AWS

Terraform + AWS

Terraform is a powerful tool that can work with all of the major public and private clouds. It is an Infrastructure as Code (IaC) offering: we write code documenting everything, and executing that code creates our whole infrastructure precisely as described.

In this article, we will discuss how to create a complete infrastructure to launch a website using Terraform and AWS.

File Structure for Project:

Project File Structure

The main file is where all of the Terraform code is written. It contains several blocks that provision AWS services: creating a key pair, launching a new EC2 instance, creating a volume, attaching the volume to the EC2 instance and formatting it to make it ready to use, integrating an S3 bucket with CloudFront, and so on.

Let’s discuss all of these features in detail.

  • Configuring AWS:
// Configuring the provider information
provider "aws" {
  region  = "ap-south-1"
  profile = "ankush"
}

Here we specify the provider we are going to use (in my case, AWS), the default region, and the profile. The profile contains the credentials used to log in to AWS.

Creating a new profile: run the following command and follow the prompts.

$ aws configure --profile ankush
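
For reference, this command stores the profile in your home directory, roughly as sketched below (the values shown are placeholders, not real credentials):

```ini
# ~/.aws/credentials
[ankush]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[profile ankush]
region = ap-south-1
output = json
```

The provider block above picks these credentials up through its profile argument.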
  • Creating Private EC2 Key-Pair:
// Creating the EC2 private key
variable "key_name" {
  default = "Terraform_test"
}

resource "tls_private_key" "ec2_private_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

module "key_pair" {
  source     = "terraform-aws-modules/key-pair/aws"
  key_name   = "Terraform_test"
  public_key = tls_private_key.ec2_private_key.public_key_openssh
}

The above code will create a private EC2 key pair named Terraform_test.
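
The key-pair module registers only the public half of the key with AWS; the private half ends up in the Terraform state. If you later want to SSH into the instance, one possible addition (my own sketch, not part of the original code) is to write the private key out to a .pem file:

```hcl
// Saving the generated private key locally so it can be used with ssh -i
resource "local_file" "private_key_pem" {
  content         = tls_private_key.ec2_private_key.private_key_pem
  filename        = "${path.module}/Terraform_test.pem"
  file_permission = "0400" // read-only for the owner, as ssh requires
}
```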

  • Creating AWS Security Group:
// Creating the AWS security group resource
resource "aws_security_group" "allow_tcp" {
  name        = "allow_tcp"
  description = "Allow TCP inbound traffic"
  vpc_id      = "vpc-4ae4f922"

  ingress {
    description = "TCP from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "SSH from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "HTTPS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_tcp"
  }
}

This code will create a new security group named allow_tcp that allows inbound HTTP (80), HTTPS (443), and SSH (22) traffic.

  • Launching a new EC2 instance:
// Launching a new EC2 instance
resource "aws_instance" "myWebOS" {
  ami                    = "ami-0447a12f28fddb066"
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = ["${aws_security_group.allow_tcp.id}"]
  subnet_id              = "subnet-2f0b3147"
  user_data              = "${file("")}"

  tags = {
    Name = "TeraTaskOne"
  }
}

The above code will launch a new EC2 instance named TeraTaskOne.

  • Creating new Volume and attaching this volume to the above EC2 instance:
// Creating an EBS volume
resource "aws_ebs_volume" "myWebVol" {
  availability_zone = "${aws_instance.myWebOS.availability_zone}"
  size              = 1

  tags = {
    Name = "TeraTaskVol"
  }
}

// Attaching the above volume to the EC2 instance
resource "aws_volume_attachment" "myWebVolAttach" {
  device_name  = "/dev/sdc"
  volume_id    = "${aws_ebs_volume.myWebVol.id}"
  instance_id  = "${aws_instance.myWebOS.id}"
  skip_destroy = true
}

This code creates a new EBS volume named TeraTaskVol, and the aws_volume_attachment resource attaches it to the EC2 instance.

But there is a catch: we can't use the attached volume directly to store data. We first need to create a partition and format it with a Linux file system (ext4). To perform this task automatically, the commands are written in a separate shell script.

#!/bin/bash
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
sudo yum install git -y
fdisk /dev/xvdc << FDISK_CMDS
g
n
1

+500MiB
n
2


t
1
83
t
2
83
w
FDISK_CMDS
mkfs -t ext4 /dev/xvdc1
mount /dev/xvdc1 /var/www/html
cd /var/www/html
git clone <repository URL> .
rm -r assets

This script will install the httpd service and git, and then start and enable httpd. As we attached the new volume at /dev/sdc (visible inside the instance as /dev/xvdc), we create a partition on it using the fdisk command. Because fdisk accepts its options interactively from the console, we feed it one answer per line, in the order the prompts appear, through a here-document. After that, we format the first partition with the ext4 file system.

To use this volume, we need to mount it to a folder. We mount it at /var/www/html because that is the default document root used by the httpd service. We then clone our GitHub repository containing the website data. As we only need the code in the volume, we remove the assets folder from it; the assets will be served from the S3 bucket instead.
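One caveat: a plain mount command does not survive a reboot, so if the instance restarts, the volume would need remounting. A sketch of an /etc/fstab entry (using the device and mount point from the script above) that would make the mount persistent:

```
/dev/xvdc1  /var/www/html  ext4  defaults,nofail  0  2
```

The nofail option keeps the instance booting even if the volume is not attached yet.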

All of the steps in this script need to run inside the EC2 instance, so we pass the script to the aws_instance resource as user_data (see the "Launching a new EC2 instance" step).

  • Creating a private S3 bucket:
// Creating a private S3 bucket
resource "aws_s3_bucket" "tera_bucket" {
  bucket = "terra-bucket-test"
  acl    = "private"

  tags = {
    Name = "terra_bucket"
  }
}

// Blocking public access
resource "aws_s3_bucket_public_access_block" "s3BlockPublicAccess" {
  bucket                  = "${aws_s3_bucket.tera_bucket.id}"
  block_public_acls       = true
  block_public_policy     = true
  restrict_public_buckets = true
}

This creates a private bucket named terra-bucket-test and blocks all public access to it. The content of the bucket can then be accessed only through the URL given by CloudFront. So, let's create a CloudFront distribution.

  • Creating CloudFront distribution:
locals {
  s3_origin_id = "myS3Origin"
}

// Creating an Origin Access Identity for CloudFront
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "Tera Access Identity"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.tera_bucket.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"

    s3_origin_config {
      # origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
      origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
    }
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "Terra Access Identity"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "blacklist"
      locations        = ["CA"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  retain_on_delete = true
}

This code creates a CloudFront distribution with the S3 bucket as its origin. In this bucket, we store all of the assets of our site, like images and icons. The CloudFront distribution provides a URL through which the objects inside the bucket can be accessed.

Here we require that whenever the infrastructure is destroyed, the CloudFront distribution should not be destroyed with it: if we created a new distribution each time, we would have to change the asset URLs every time. To overcome this problem, we set retain_on_delete to true, which disables the distribution instead of deleting it when the resource is destroyed through Terraform.
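To avoid fishing the distribution's URL out of the AWS console after every apply, Terraform can print it for us. This output block is my own addition, not part of the original code; it reads the domain_name attribute of the distribution defined above:

```hcl
// Printing the CloudFront URL after terraform apply
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```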

  • Setting a bucket policy for CloudFront:
// AWS bucket policy for CloudFront
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.tera_bucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.tera_bucket.arn}"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
}

resource "aws_s3_bucket_policy" "s3BucketPolicy" {
  bucket = "${aws_s3_bucket.tera_bucket.id}"
  policy = "${data.aws_iam_policy_document.s3_policy.json}"
}

The code above sets the bucket policy so that only the CloudFront origin access identity can read objects from, and list, the bucket.

  • Uploading assets to the S3 bucket:
// Uploading files to the S3 bucket
resource "aws_s3_bucket_object" "bucketObject" {
  for_each = fileset("/home/cankush/Downloads/assets", "**/*.jpg")

  bucket       = "${aws_s3_bucket.tera_bucket.bucket}"
  key          = each.value
  source       = "/home/cankush/Downloads/assets/${each.value}"
  content_type = "image/jpeg"
}

Our assets (images) are stored in the folder /home/cankush/Downloads/assets, so we need to upload them recursively. The code above uploads all of the matching files from the local assets folder to the S3 bucket.
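Note that the pattern **/*.jpg only matches JPEG files, while our assets may also include icons and other image types. One possible extension, sketched here under my own naming (the allAssets resource and mime_types map are not in the original code), is to upload every file and look its content type up from a small map:

```hcl
locals {
  // Minimal MIME map; extend with the types your assets actually use
  mime_types = {
    "jpg" = "image/jpeg"
    "png" = "image/png"
    "ico" = "image/x-icon"
  }
}

// Uploading all assets with a best-effort content type
resource "aws_s3_bucket_object" "allAssets" {
  for_each = fileset("/home/cankush/Downloads/assets", "**/*")

  bucket       = "${aws_s3_bucket.tera_bucket.bucket}"
  key          = each.value
  source       = "/home/cankush/Downloads/assets/${each.value}"
  content_type = lookup(local.mime_types, lower(regex("[^.]+$", each.value)), "binary/octet-stream")
}
```

The regex pulls out the file extension, and lookup falls back to binary/octet-stream for anything not in the map.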

So, finally, our code is ready and can be executed to launch the whole infrastructure with a single command.

Executing the above code:

Run the following commands in order to execute the code.

$ terraform init

The terraform init command downloads all of the provider plugins required by our Terraform code.
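On newer Terraform versions (0.13+), the provider that init downloads can be pinned so that runs stay reproducible. This block is a suggested addition, not part of the original code:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.0" // pin the major version the code was written against
    }
  }
}
```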

$ terraform apply

The terraform apply command executes the Terraform code and provisions the resources.

As declared in the code, a new instance named TeraTaskOne is launched.

EC2 instance launched

The new volume named TeraTaskVol is created and attached to the EC2 instance.

EBS volume is created and attached to the EC2 instance

A CloudFront distribution with the comment “Terra Access Identity” is created.

CloudFront distribution is created

An S3 bucket named terra-bucket-test is created and made private, and the image is uploaded to the bucket.

S3 bucket is created and the image is uploaded

After our code executes successfully, our infrastructure is ready and our website is deployed, up, and running.

Let's see our website.

Website is up and running

The page content we see on the website is served from the external EBS volume, and the image is served from the S3 bucket through CloudFront.

This integration gives us a lot of power: the whole infrastructure can be launched with a single command.

Destroying this infrastructure is also pretty easy. Just run the following command and the whole infrastructure is destroyed (except the CloudFront distribution, which is retained because of retain_on_delete).

$ terraform destroy

Here is the link for the source code:

Thanks a lot for reading my blog! If you liked this article, please applaud it. You can also follow me on Twitter at @cankush625 or find me on LinkedIn. I’d love to hear from you if I can help you with Cloud Computing and Terraform.




Ankush Chavan

Tech blogger, researcher and integrator