Docker is a great tool for containerization. But when it comes to scaling applications, we need resilience: Docker alone cannot guarantee the high availability and reliability of our deployments. So we need a tool that is capable of managing containers at scale. One of the best container orchestration tools is Kubernetes, which automates the deployment, scaling, and management of containerized applications. Running Kubernetes in a local data center, however, still limits availability, so we will run Kubernetes on the cloud. There are two ways to do this: use a fully managed Kubernetes-as-a-Service offering from a cloud provider (AWS EKS, Azure AKS, GKE, etc.), or create our own Kubernetes cluster on the cloud on top of EC2 or VM instances.
In this article, we will create our own Kubernetes cluster on the AWS cloud and deploy a WordPress application on it. The cluster creation will be automated using Ansible roles.
Prerequisites
- Ansible should be installed
- An AWS account should be created and activated
- RHEL 8 OS is recommended
Step 1: Provision two EC2 instances
In this demo, we are using the AWS cloud for creating the Kubernetes cluster. So, let's launch two EC2 instances. One instance will serve as the Kubernetes master node and the other will serve as a Kubernetes slave node.
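If you want to automate this step with Ansible as well, a minimal sketch using the `amazon.aws.ec2_instance` module is shown below. The key pair name, security group, region, and AMI ID are placeholders — substitute your own values; the security group must allow SSH and the Kubernetes ports (6443, 10250, etc.).

```yaml
# provision_ec2.yml — a sketch, not the exact playbook used in this article.
# key_name, security_group, region, and ami_id are assumptions to replace.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Launch two EC2 instances for the Kubernetes cluster
      amazon.aws.ec2_instance:
        name: "kube-{{ item }}"
        key_name: mykey                # your EC2 key pair
        instance_type: t2.medium       # kubeadm wants at least 2 vCPU / 2 GB on the master
        image_id: "{{ ami_id }}"       # e.g. an Amazon Linux 2 or RHEL 8 AMI for your region
        security_group: kube-sg        # must allow SSH and the Kubernetes ports
        region: ap-south-1
        network:
          assign_public_ip: true
        state: running
      loop:
        - master
        - slave
```

You can equally launch the instances from the AWS web console; all that matters for the next step is that both instances are reachable over SSH.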
Step 2: Create Kubernetes cluster
Now, let’s configure Kubernetes master and slave on EC2 instances.
Recently, I created an Ansible collection that contains Ansible roles for configuring Kubernetes master and slave nodes. This collection can configure a Kubernetes cluster on Amazon Linux 2, RHEL 8, and Ubuntu Server.
I'm assuming you have already installed Ansible. For Ansible installation, you can refer to this blog:
Launching a website in a docker using Ansible
To perform this project, I'm using the RHEL 8 operating system for the controller as well as the managed nodes.
Now, we will install the Ansible collection for configuring the Kubernetes cluster using the ansible-galaxy command.
$ ansible-galaxy collection install cankush625.kubernetescluster
After running the above command, the collection will be installed. To configure the Kubernetes cluster with it, we need to write a playbook.
Configure Kubernetes master node
The following playbook will be used for configuring the Kubernetes master node.
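A minimal sketch of such a playbook is shown below. The host group name, role name, and variable names come from the collection described above; the specific values (pod network CIDR and the ec2-user account) are assumptions you should adapt to your setup.

```yaml
# kube_master.yml — a sketch assuming the cankush625.kubernetescluster collection
- hosts: kube_master
  become: yes
  collections:
    - cankush625.kubernetescluster
  vars:
    pod_network_cidr: "10.244.0.0/16"  # IP range for the pod network (assumed value)
    owner: ec2-user                    # username on the managed node (assumed)
    group: ec2-user                    # group name on the managed node (assumed)
  roles:
    - kube_master
```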
In the above playbook, kube_master is a host group containing the IPs of the instances that we want to configure as Kubernetes master nodes.
Using the collections keyword, we include the collection we just installed; the role comes from that collection. The kube_master role requires three variables: pod_network_cidr contains the IP range for the pod network, while owner and group contain the username and group name.
Run the above playbook using the following command and wait till the process completes.
$ ansible-playbook kube_master.yml
It will automatically configure the Kubernetes master node and print a join command for connecting the slave nodes. Copy this command and keep it handy; we will need it while configuring the Kubernetes slave nodes.
Configure Kubernetes slave node
Similar to the master node, we will use the following playbook for configuring the Kubernetes slave nodes.
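A minimal sketch of the slave playbook is below. The placeholders in kube_join_command stand for the values printed while configuring the master node — do not copy them literally.

```yaml
# kube_slave.yml — a sketch assuming the cankush625.kubernetescluster collection
- hosts: kube_slave
  become: yes
  collections:
    - cankush625.kubernetescluster
  vars:
    # Paste the join command that the master configuration printed.
    # <master-ip>, <token>, and <hash> are placeholders.
    kube_join_command: "kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
  roles:
    - kube_slave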
We will use the kube_slave role from the collection to configure the Kubernetes slave nodes. This role requires one variable, kube_join_command. Set it to the join command that we copied earlier.
This playbook uses the kube_slave host group. So, add the instances that we want to configure as Kubernetes slave nodes to the kube_slave host group in the Ansible inventory.
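An inventory with both host groups might look like the sketch below (YAML inventory format). The placeholder IPs, the ec2-user account, and the key path are assumptions for an EC2 setup.

```yaml
# inventory.yml — replace the placeholder IPs with your instances' public IPs
kube_master:
  hosts:
    <master-instance-ip>:
      ansible_user: ec2-user
      ansible_ssh_private_key_file: ~/mykey.pem
kube_slave:
  hosts:
    <slave-instance-ip>:
      ansible_user: ec2-user
      ansible_ssh_private_key_file: ~/mykey.pem
```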
Now, run the above playbook.
$ ansible-playbook kube_slave.yml
With that, our Kubernetes cluster is ready. Go to the Kubernetes master node and run the following command to check whether the slave node has joined.
$ kubectl get nodes
For the sake of this practical, I configured the kubectl command on the master node. But you can configure kubectl on your local machine as well (that will be the topic of another article).
Step 3: Write YAML file for configuring WordPress
Now that the Kubernetes cluster is ready to use, let us write a YAML file that creates a Deployment for the WordPress application.
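A minimal sketch of such a manifest is below. The Deployment name mywordpress matches the expose command used later; the labels, the Secret name mysql-pass, and the Service name wordpress-mysql are assumptions that must stay consistent with the MySQL manifest and the kustomization file.

```yaml
# wordpress.yml — a sketch; names and labels are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql     # the MySQL Service name
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass       # Secret generated by kustomize
                  key: password
```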
This WordPress application will use MySQL as its database. So, let's create a deployment for MySQL as well.
Step 4: Write YAML file for configuring MySQL
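A minimal sketch of the MySQL manifest is below. It pairs a headless Service (so WordPress can reach the database by the name wordpress-mysql) with a Deployment; the names, labels, and Secret reference are assumptions matching the WordPress manifest. For real use you would also add a PersistentVolumeClaim so the data survives pod restarts.

```yaml
# mysql.yml — a sketch; names and labels are assumptions
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None          # headless Service: DNS resolves straight to the pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass   # same Secret the WordPress pod uses
                  key: password
```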
Step 5: Write kustomization file
The kustomization.yaml is a special kind of file in the Kubernetes world. Using this file, we can apply a whole set of YAML files with a single command; list the files in the order in which we want them applied.
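A minimal sketch is below, assuming the MySQL and WordPress manifests are saved as mysql.yml and wordpress.yml. The secretGenerator creates the mysql-pass Secret that both deployments reference; replace YOUR_PASSWORD with a real password.

```yaml
# kustomization.yaml — a sketch; filenames and the password are assumptions
secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD   # replace with your own database password
resources:
  - mysql.yml
  - wordpress.yml
```

Kustomize appends a content hash to the generated Secret's name and rewrites the secretKeyRef entries in the manifests to match, so the deployments pick up the right Secret automatically.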
Step 6: Deploying the WordPress application and MySQL database
Now, we have all of the files ready for deploying the WordPress application and the MySQL database.
We can run all of the files using a single command and that will deploy the entire application with the database.
$ kubectl apply -k .
This will deploy the WordPress application as well as the MySQL database.
Let's expose the WordPress deployment.
$ kubectl expose deployment.apps/mywordpress --type=NodePort --port=80
$ kubectl get svc
Now, the deployment is exposed successfully.
Finally, we can access the WordPress site using the public IP of a node and the NodePort shown by kubectl get svc, i.e. http://<node-ip>:<node-port>.