Integrating Kubernetes with Ansible
Objective:-
- Ansible Role to configure a K8S multi-node cluster over the AWS Cloud.
- Create an Ansible playbook to launch 3 AWS EC2 instances.
- Create an Ansible playbook to configure Docker over those instances.
- Create a playbook to configure the K8S master and K8S worker nodes on the above created EC2 instances using kubeadm.
- Convert the playbooks into roles.
As described above, we will configure a multi-node cluster over cloud services provided by Amazon. The whole task will be performed with the help of Ansible, a tool used for configuration management. From our local system we will run different playbooks, and the whole configuration will be done automatically.
Brief Introduction
Ansible is a software tool that provides simple but powerful automation for cross-platform computer support. It is primarily intended for IT professionals, who use it for application deployment, updates on workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and nearly anything a systems administrator does on a weekly or daily basis. Ansible doesn't depend on agent software and has no additional security infrastructure, so it's easy to deploy.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
How the task will be performed?
- A playbook for launching three instances using the AWS EC2 service.
- A second playbook for installing Docker on the launched instances.
- A playbook for setting up the K8S master node.
- A playbook for setting up the K8S slave (worker) nodes.
Step-1
Launching three instances using an Ansible playbook:-
The playbook contains two tasks: one for the master node and one for the slave nodes.
The variable file "key.yaml" is used for storing the access key and secret key for authorization by Amazon.
The master node is launched with the tag name Master in the region "ap-south-1", which denotes the Mumbai data center.
Similarly, the slave nodes are launched with the tag name Slave in the Mumbai region with the respective configuration.
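The launch playbook could look roughly like the sketch below. The variable names (access_key, secret_key, ami_id, keypair), the instance type, and the module options are assumptions for illustration, not the exact playbook used in this task:

```yaml
# launch.yaml -- a minimal sketch using the amazon.aws collection
- hosts: localhost
  connection: local
  vars_files:
    - key.yaml                # assumed to define access_key and secret_key
  tasks:
    - name: Launch the K8S master instance
      amazon.aws.ec2_instance:
        name: "Master"                  # becomes the Name tag
        region: ap-south-1              # Mumbai data center
        image_id: "{{ ami_id }}"        # e.g. an Amazon Linux 2 AMI for ap-south-1
        instance_type: t2.micro
        key_name: "{{ keypair }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        state: running

    - name: Launch two K8S slave instances
      amazon.aws.ec2_instance:
        name: "Slave"
        region: ap-south-1
        image_id: "{{ ami_id }}"
        instance_type: t2.micro
        key_name: "{{ keypair }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        count: 2
        state: running
```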
Step -2
Installing docker in the above launched instances
The playbook comprises only a single task, run on the managed nodes tagged Master as well as Slave: installing Docker.
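A sketch of that playbook is below. The host pattern assumes the inventory maps the EC2 tags to groups named Master and Slave; on other inventories the pattern would differ:

```yaml
# docker.yaml -- install and start Docker on all cluster nodes (sketch)
- hosts: "Master:Slave"       # inventory groups assumed to match the EC2 tags
  become: yes
  tasks:
    - name: Install Docker
      package:
        name: docker
        state: present

    - name: Start and enable the Docker service
      service:
        name: docker
        state: started
        enabled: yes
```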
Step-3
Now setting up the master using an Ansible playbook.
The playbook created for configuring the master node performs the following tasks:
1. Enable the Docker service.
2. Configure the yum repository on the managed node so that the required packages can be installed.
3. Install kubeadm, kubelet, and kubectl.
4. Enable the kubelet service.
5. Change the cgroup driver for the Docker service from cgroupfs to systemd.
6. Restart the Docker service after changing the driver.
7. Install the iproute-tc package.
8. Update the routing tables for the K8S master.
9. Reload the system settings, since the conf file has changed.
10. Update the ~/.kube directory, since the master system will also be used as a client.
11. Make the master usable as a client as well.
12. Install Flannel, which helps in setting up the tunneling (overlay) network.
In the image below you can see the master is configured successfully
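The twelve tasks above can be condensed into a playbook along these lines. The repository URL follows the standard upstream Kubernetes yum instructions of the time; the kubeadm init options, CIDR, and file paths are assumptions for illustration rather than the exact playbook:

```yaml
# master.yaml -- sketch of the master-node configuration
- hosts: Master
  become: yes
  tasks:
    - name: 1. Enable and start Docker
      service:
        name: docker
        state: started
        enabled: yes

    - name: 2. Configure the Kubernetes yum repository
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: yes
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

    - name: 3. Install kubeadm, kubelet, and kubectl
      package:
        name: [kubeadm, kubelet, kubectl]
        state: present

    - name: 4. Enable the kubelet service
      service:
        name: kubelet
        enabled: yes

    - name: 5. Switch the Docker cgroup driver to systemd
      copy:
        dest: /etc/docker/daemon.json
        content: '{"exec-opts": ["native.cgroupdriver=systemd"]}'

    - name: 6. Restart Docker after changing the driver
      service:
        name: docker
        state: restarted

    - name: 7. Install iproute-tc
      package:
        name: iproute-tc
        state: present

    - name: 8. Let iptables see bridged traffic
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: "net.bridge.bridge-nf-call-iptables = 1\n"

    - name: 9. Reload sysctl since the conf file changed
      command: sysctl --system

    # Not listed explicitly above, but required before workers can join:
    - name: Initialise the control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16

    - name: 10-11. Copy the kubeconfig so the master can also act as a client
      shell: mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    - name: 12. Install Flannel for the overlay network
      shell: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```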
Step-4
The final step: configuring the slave nodes for Kubernetes.
The slave and master have almost identical configurations.
The playbook created for configuring the slave nodes performs the following tasks:
1. Enable the Docker service.
2. Configure the yum repository on the managed node so that the required packages can be installed.
3. Install kubeadm, kubelet, and kubectl.
4. Enable the kubelet service.
5. Change the cgroup driver for the Docker service from cgroupfs to systemd.
6. Restart the Docker service after changing the driver.
7. Install the iproute-tc package.
8. Update the routing tables for the K8S worker.
9. Reload the system settings, since the conf file has changed.
10. Connect the slave to the master by running the join command using the shell module.
The join.yaml is the file which is used for storing join command as a variable.
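The final task could look like the sketch below; the variable name join_command is an assumption about what join.yaml defines. Tasks 1-9 are identical to the master playbook and are omitted here:

```yaml
# slave.yaml -- sketch of the slave-node playbook (tasks 1-9 as on the master)
- hosts: Slave
  become: yes
  vars_files:
    - join.yaml               # assumed to define join_command, e.g.
                              # join_command: "kubeadm join <master-ip>:6443 --token ..."
  tasks:
    - name: 10. Join the worker to the cluster
      shell: "{{ join_command }}"
```

The join command itself is printed by `kubeadm token create --print-join-command` on the master, which is one way the variable in join.yaml could be populated.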
Conclusion
The cluster is configured successfully by executing the different playbooks together. The cluster can be verified with the following image:-
Also, for testing, a pod is launched using the master:-
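One way to run such a test is to apply a small manifest from the master; the pod name and image below are illustrative:

```yaml
# test-pod.yaml -- a minimal pod to verify the cluster schedules workloads
apiVersion: v1
kind: Pod
metadata:
  name: test-web
spec:
  containers:
    - name: web
      image: httpd          # any small public image works for a smoke test
```

Applying it with `kubectl apply -f test-pod.yaml` and then running `kubectl get pods -o wide` should show the pod in the Running state on one of the worker nodes.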