Integrating Ansible and Jenkins with Terraform to make a powerful infrastructure

Image Credit: https://miro.medium.com/max/1077/1*3ewRImzpkYHMHL-tzQunvA.png

As an advancement over the previous version, to create a fully automated and fully configured infrastructure, there are many configuration management tools available on the market. One of the most powerful among them is Ansible.

Ansible is an open-source tool that does software provisioning, configuration management, and application deployments. It takes over a newly created server instance and installs the required software based on a playbook.

Installation of Ansible:

Refer to this page to get a detailed guide on the installation of Ansible on your preferred OS.

New Features:

  • Added a way to save the .pem private key on the host OS.
  • Ansible playbook to configure the external volume.
  • Continuous Integration using Jenkins.

Let’s get started by adding the code that saves the .pem key file to the local machine, so that we can use this key to log in to the remote instance over SSH.

Save .pem private key file:

The following code saves the auto-generated EC2 private key as a .pem file on our local machine.

resource "tls_private_key" "ec2_private_key" {algorithm = "RSA"rsa_bits  = 4096provisioner "local-exec" {command = "echo '${tls_private_key.ec2_private_key.private_key_pem}' > ~/Desktop/${var.key_name}.pem"}}

Here, the private key file is saved in the Desktop directory. By default, however, this file is created with permissions that are too open. To use this key to log in to the EC2 instance, we need to make it private, which we do by restricting the permissions on the .pem key file.

resource "null_resource" "key-perm" {depends_on = [tls_private_key.ec2_private_key,]provisioner "local-exec" {command = "chmod 400 ~/Desktop/${var.key_name}.pem"}}

Now, our key is saved on the local machine and is ready to use for login.
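SSH refuses to use a private key that other users can read, which is why we restrict the key to mode 400 (read-only for the owner). A quick local sketch of the effect, using a scratch file at a hypothetical path instead of the real key:

```shell
# start fresh, then restrict the scratch file the same way as the key
rm -f /tmp/demo_key.pem
touch /tmp/demo_key.pem
chmod 400 /tmp/demo_key.pem
# show the resulting permission bits
stat -c '%a' /tmp/demo_key.pem   # prints 400
```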

Ansible-Playbook to automate the configuration:

In the previous version, we used a shell script (task1/vol.sh) to configure the external volume automatically. But a more efficient way to configure the server is to use a configuration management tool. Ansible lets us write a playbook that configures the server for us.

Let’s take a look at the ansible-playbook that configures the external volume and makes it ready to use.

This playbook is named master.yml.

- hosts: all
  become: true
  tasks:
    - name: Install httpd
      command: yum install httpd -y
      become: yes
      become_method: sudo
      become_user: root
    - name: Start httpd
      command: systemctl start httpd
      become: yes
      become_method: sudo
      become_user: root
    - name: Enable httpd
      command: systemctl enable httpd
      become: yes
      become_method: sudo
      become_user: root
    - name: Install git
      command: yum install git -y
      become: yes
      become_method: sudo
      become_user: root
    - name: Create a new primary partition with a size of 500MiB
      parted:
        device: /dev/xvdc
        number: 1
        state: present
        part_end: 500MiB
    - name: Format the partition and mount it to /var/www/html
      shell: |
        mkfs -t ext4 /dev/xvdc1
        mount /dev/xvdc1 /var/www/html
    - name: Copy the code from GitHub to /var/www/html
      shell: |
        cd /var/www/html
        git clone https://github.com/cankush625/Web.git

Let’s understand what this file does.

We execute this playbook from within the Terraform code using the following command:

// Configuring the external volume
resource "null_resource" "setupVol" {
  depends_on = [
    aws_volume_attachment.myWebVolAttach,
  ]

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ec2-user --private-key ~/Desktop/${var.key_name}.pem -i '${aws_instance.myWebOS.public_ip},' master.yml"
  }
}

It logs in to the remote instance using the IP address, .pem key, and user name provided to the ansible-playbook command. At this point, our connection to the EC2 instance is established.

Now, Ansible goes inside the instance (remote OS) and starts executing the tasks one by one. One thing to notice here is that we are executing the commands as the root user (superuser).
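The ANSIBLE_HOST_KEY_CHECKING=False prefix in the command above stops SSH from interactively prompting to accept the new instance's host key, which would otherwise hang the Terraform run. As an alternative to setting the environment variable every time, the same behaviour can be configured once in an ansible.cfg file placed next to the playbook (a sketch, using Ansible's standard [defaults] section):

```ini
[defaults]
host_key_checking = False
```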

Inside the EC2 instance, it first installs the httpd service, starts it, and enables it. After that, it installs git.

As we have already attached the new volume (/dev/xvdc), we need to create a partition on it to make the volume usable. We create a 500MiB partition as /dev/xvdc1.

After that, we format this partition as ext4 and mount it to the /var/www/html folder, which is the default document root for the httpd service.

Finally, we change into /var/www/html and clone the Git repository containing the website code. Because the external volume (persistent storage) is mounted at /var/www/html, the code cloned from GitHub is automatically saved on the external volume and served from the /var/www/html directory.

Run the terraform code:

First of all, we run the terraform init command to download the required plugins for the code.

After that, we apply the code using the terraform apply command.

Terraform apply: ansible-playbook run

As we can see in the image above, the ansible-playbook is executed task by task on the EC2 instance (remote OS).

Terraform apply: ansible-playbook successful

The above image shows ok=8, which means all 8 tasks we gave to the ansible-playbook completed successfully, and our server and volume should be configured as required.

Let’s check whether our volume is configured correctly. One of the best ways to verify this is to open our deployed website.

Website is accessed and displayed correctly

Our website is accessed successfully and displays all of its content correctly. That means our server was configured correctly using the ansible-playbook.

Continuous Integration using Jenkins:

Image Credit: https://miro.medium.com/max/1216/1*xvtglV_UwvYkzu62_723tg.png

Creating webhook for Jenkins Job:

A webhook is required to connect the GitHub repository to the Jenkins job and to trigger the job whenever changes are made to the repository.
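With the Jenkins GitHub plugin, the webhook points at a fixed endpoint on the Jenkins server. A typical configuration for the GitHub webhook form, assuming Jenkins listens on its default port 8080 (the host name here is a placeholder):

```text
Payload URL:  http://<jenkins-host>:8080/github-webhook/
Content type: application/json
Events:       Just the push event
```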

Creating webhook

The green tick in front of our webhook indicates that the connection between the webhook and Jenkins is established.

Connection established

Creating and configuring the Jenkins job:

Create a job
Set SCM
Running the terraform code

Note: To use the sudo command in the Jenkins shell, you may need to configure the sudoers file on the local system.
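For example, assuming Jenkins runs as the jenkins user, a drop-in file like the following (created and validated with visudo) lets the job's shell steps run sudo without a password prompt. This is a convenience for a lab setup, not a production recommendation:

```text
# /etc/sudoers.d/jenkins
jenkins ALL=(ALL) NOPASSWD: ALL
```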

Send email on job completion

When we commit changes to the GitHub repo,

Committing changes in Github repository

Our job will automatically get triggered and start executing the terraform code.

Jenkins job is triggered and our terraform code execution is started

So, in this module, we integrated our Terraform code with Ansible for configuration management, and finally we integrated Jenkins to create a continuous integration pipeline.

Here is the link for the source code: https://github.com/cankush625/Terraform/tree/master/task1_auto

Updating with EFS(Elastic File System):

Amazon EFS file systems store data and metadata across multiple Availability Zones in an AWS Region. An EFS file system can be accessed from multiple instances at the same time, and we can edit the files inside the EFS file system on the go.

Here, we remove the EBS volume from the earlier project and use the EFS file system instead.

So, let’s start by adding the terraform code for EFS.

  • Creating EFS:

resource "aws_efs_file_system" "myWebEFS" {
  creation_token = "myWebFile"
  tags = {
    Name = "myWebFileSystem"
  }
}

  • Mounting EFS:

resource "aws_efs_mount_target" "mountefs" {
  file_system_id  = "${aws_efs_file_system.myWebEFS.id}"
  subnet_id       = "subnet-2f0b3147"
  security_groups = ["${aws_security_group.allow_tcp_nfs.id}",]
}

  • Configuring EC2 and EFS:

resource "null_resource" "setupVol" {
  depends_on = [
    aws_efs_mount_target.mountefs,
  ]

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ec2-user --private-key ~/Desktop/${var.key_name}.pem -i '${aws_instance.myWebOS.public_ip},' master.yml -e 'file_sys_id=${aws_efs_file_system.myWebEFS.id}'"
  }
}
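The mount target above references a security group named allow_tcp_nfs, whose definition is not shown here. A minimal sketch of what it might look like, assuming the default VPC and that NFS traffic (EFS uses TCP port 2049) should be reachable from the instance:

```hcl
resource "aws_security_group" "allow_tcp_nfs" {
  name        = "allow_tcp_nfs"
  description = "Allow NFS traffic for EFS"

  // allow inbound NFS (TCP 2049) so the instance can reach the mount target
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  // allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```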

Here, in the above command, we pass a variable named file_sys_id, which holds the EFS file system id. This variable is used during configuration management in master.yml.

  • Configuration file:

master.yml file

- hosts: all
  become: true
  tasks:
    - name: Install httpd
      command: yum install httpd -y
      become: yes
      become_method: sudo
      become_user: root
    - name: Start httpd
      command: systemctl start httpd
      become: yes
      become_method: sudo
      become_user: root
    - name: Enable httpd
      command: systemctl enable httpd
      become: yes
      become_method: sudo
      become_user: root
    - name: Install git
      command: yum install git -y
      become: yes
      become_method: sudo
      become_user: root
    - name: Install AWS EFS utilities
      command: yum install -y amazon-efs-utils
      become: yes
      become_method: sudo
      become_user: root
    - name: Mount EFS at /var/www/html
      command: mount -t efs {{ file_sys_id }}:/ /var/www/html
      become: yes
      become_method: sudo
      become_user: root
    - name: Edit fstab so EFS automatically mounts on reboot
      shell: echo '{{ file_sys_id }}:/ /var/www/html efs defaults,_netdev 0 0' >> /etc/fstab
      become: yes
      become_method: sudo
      become_user: root
    - name: Copy the code from GitHub to /var/www/html
      shell: |
        cd /var/www/html
        git clone https://github.com/cankush625/Web.git

After installing the required packages like httpd and git, as we saw earlier, this file installs another package needed to work with EFS, i.e., amazon-efs-utils.

After that, we mount the EFS file system at /var/www/html and edit /etc/fstab so that EFS is mounted automatically on reboot.

In the final step, this file clones the website code from GitHub into /var/www/html. And since we mounted the EFS file system at /var/www/html, our code is saved in EFS; it is now accessible from any of the instances (remote OS) through EFS, and we can modify it as well.

Let’s run the code:

  • Executing Terraform code:

After successful execution, this code will create and mount the EFS file system.

Now, create another instance that is exactly the same as the first instance created by running the Terraform code.

Creating another identical instance

Now, if you check the configuration, the EFS file system is automatically attached to the new instance we are going to launch.

EFS file system is already attached

Now, we have two running instances.

Out of these two instances, we cloned the code from GitHub to the first instance at /var/www/html. So, our site is up and running.

Website running from the first instance

But if we try to access the website from the second instance, it is accessible as well. This is because we attached the EFS file system at /var/www/html on the second instance too.

Website running from the second instance

Here is the link for the source code: https://github.com/cankush625/Terraform/tree/master/task1_efs

Thanks a lot for reading my blog! If you liked this article, please applaud it. You can also follow me on Twitter at @cankush625 or find me on LinkedIn. I’d love to hear from you if I can help you with Cloud Computing and Terraform.
