Deploy Kubernetes Using Ansible

In this blog we will deploy Kubernetes 1.23.2 using Ansible playbooks. Use these playbooks and your Kubernetes deployment can be complete in about an hour; all you need to do is change the IP addresses and hostnames to match your infrastructure. I have created two playbooks:

  1. install.yml
  2. k8s-workers.yml

Playbook install.yml disables SELinux, disables swap (including the fstab entry), disables the firewall, enables netfilter, configures the Kubernetes repo, installs Kubernetes, and installs the CRI-O runtime.

Playbook k8s-workers.yml adds the worker nodes to the cluster.

The most widely used container runtimes are:

  • Docker
  • CRI-O
  • Containerd

For this cluster, we are going to use the CRI-O runtime.

Here are my Ansible files:

[root@ansible ranjeet]# ll
total 52
-rw-r--r--. 1 root root 19993 Feb 26 21:49 ansible.cfg
-rw-r--r--. 1 root root 167 Feb 26 21:48 hosts
-rw-r--r--. 1 root root 5399 Feb 26 21:43 install.yml
-rw-r--r--. 1 root root 567 Feb 26 21:41 k8s-workers.yml
-rw-r--r--. 1 root root 571 Feb 26 21:41 README.md

My ansible.cfg file: I have changed the inventory path and disabled host key checking. (These settings live under the [defaults] section.)

[defaults]
inventory = /root/ranjeet/hosts
host_key_checking = False

In my Ansible hosts file I have made three groups: masters, which contains the Kubernetes master; workers, with all the worker nodes; and all, with every node.

[root@ansible ranjeet]# cat hosts
[masters]
kubernetesM.ranjeetbadhe.com
[workers]
n1.ranjeetbadhe.com
n2.ranjeetbadhe.com
[all]
kubernetesM.ranjeetbadhe.com
n1.ranjeetbadhe.com
n2.ranjeetbadhe.com

All of my Linux nodes are configured with the proper hostnames:

[root@ansible ranjeet]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.50 ansible ansible.ranjeetbadhe.com
192.168.0.41 n1 n1.ranjeetbadhe.com
192.168.0.42 n2 n2.ranjeetbadhe.com
192.168.0.51 kubernetesM kubernetesM.ranjeetbadhe.com
192.168.0.55 nfs nfs.ranjeetbadhe.com
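
If you don't want to edit /etc/hosts by hand on every machine, a small Ansible task can push the same entries to all nodes. This is just a sketch, assuming the entries above are what you want everywhere; adjust it to your network:

- hosts: all
  tasks:
    - name: Add cluster name resolution entries to /etc/hosts
      blockinfile:
        path: /etc/hosts
        block: |
          192.168.0.41 n1 n1.ranjeetbadhe.com
          192.168.0.42 n2 n2.ranjeetbadhe.com
          192.168.0.51 kubernetesM kubernetesM.ranjeetbadhe.com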

I perform an Ansible ping test. It must succeed so that the Ansible engine can run the playbooks and push the configuration.

[root@ansible ranjeet]# ansible all -m ping
kubernetesM.ranjeetbadhe.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
n2.ranjeetbadhe.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
n1.ranjeetbadhe.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
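
The ping test assumes key-based SSH from the Ansible host to every node. If it fails with a permission error, push your key to each node first (standard OpenSSH tooling; root access is assumed here because the playbooks run privileged tasks):

ssh-keygen -t rsa
ssh-copy-id root@n1.ranjeetbadhe.com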

Feel free to copy this playbook and start your deployment.

[root@ansible ranjeet]# cat install.yml
- hosts: all
  tasks:
    - name: Disable SELinux
      selinux:
        state: disabled

    - name: Disable swap
      shell: swapoff -a

    - name: Comment out the swap entry in fstab
      shell: line=$(grep -n -m 1 swap /etc/fstab | cut -d ":" -f 1) && sed -e "${line}s/^/#/" /etc/fstab > /etc/fstab.bk

    - name: Persist the swap-disabled fstab
      shell: cp /etc/fstab.bk /etc/fstab

    - name: Activate netfilter
      shell: modprobe br_netfilter

    - name: Test netfilter config
      shell: if grep -q "^net.ipv4.ip_forward = 1" /etc/sysctl.conf; then echo false; else echo true; fi
      register: test_grep

    - name: Enable IP forwarding
      lineinfile:
        dest: /etc/sysctl.conf
        line: net.ipv4.ip_forward = 1
      when: test_grep.stdout == "true"

    - name: Disable firewall
      shell: systemctl stop firewalld && systemctl disable firewalld && systemctl mask --now firewalld

    - name: Add epel-release repo and utils
      yum:
        name: ['epel-release', 'yum-utils', 'device-mapper-persistent-data', 'lvm2', 'wget']

    - name: Create a repository file for Kubernetes
      file:
        path: /etc/yum.repos.d/kubernetes.repo
        state: touch

    - name: Add repository details to the Kubernetes repo file
      blockinfile:
        path: /etc/yum.repos.d/kubernetes.repo
        block: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=0
          repo_gpgcheck=0
          gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
                 https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

    - name: Install Kubernetes packages
      yum:
        name:
          - "kubeadm-1.23.2-0"
          - "kubelet-1.23.2-0"
          - "kubectl-1.23.2-0"
        state: present

    - name: Load br_netfilter on boot
      shell: |
        cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
        br_netfilter
        EOF

    - name: Enable bridge netfilter calls in k8s.conf
      shell: |
        cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1
        EOF

    - name: Apply sysctl settings
      shell: sudo sysctl --system

    - name: Download containers-common
      shell: wget https://rpmfind.net/linux/centos/7.9.2009/extras/x86_64/Packages/containers-common-0.1.40-11.el7_8.x86_64.rpm

    - name: Install containers-common
      yum:
        name: ./containers-common-0.1.40-11.el7_8.x86_64.rpm
        state: present

    - name: Enable kubelet
      shell: systemctl enable --now kubelet

    - name: Add 10-kubeadm.conf
      blockinfile:
        path: /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
        block: |
          # Note: This dropin only works with kubeadm and kubelet v1.11+
          [Service]
          Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
          Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
          # This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
          EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
          # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
          # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
          # The following line is added for CRI-O
          Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
          EnvironmentFile=-/etc/sysconfig/kubelet
          ExecStart=
          ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS

    - name: Add repository details to the kubernetes-devlevel repo file
      shell: |
        cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes-devlevel.repo
        [devel_kubic_libcontainers_stable]
        name=Stable Releases of Upstream github.com/containers packages (CentOS_7)
        type=rpm-md
        baseurl=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/
        gpgcheck=1
        gpgkey=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/repodata/repomd.xml.key
        enabled=1
        EOF

    - name: Install cri-o repos
      shell: |
        VERSION=1.23
        OS=CentOS_7
        sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
        sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo

    - name: Install cri-o
      yum:
        name:
          - "cri-o"
          - "cri-tools"
        state: present

    - name: Start cri-o and kubelet
      shell: |
        systemctl daemon-reload
        systemctl enable crio --now
        systemctl enable kubelet --now

    - name: Reboot
      reboot:

[root@ansible ranjeet]# ansible-playbook install.yml
PLAY [all] ********************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
ok: [n1.ranjeetbadhe.com]
ok: [n2.ranjeetbadhe.com]
ok: [kubernetesM.ranjeetbadhe.com]
TASK [Disable SELinux] ********************************************************************************************************
[WARNING]: SELinux state temporarily changed from 'enforcing' to 'permissive'. State change will take effect next reboot.
changed: [n1.ranjeetbadhe.com]
changed: [kubernetesM.ranjeetbadhe.com]
changed: [n2.ranjeetbadhe.com]
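
Once install.yml finishes on all nodes, it is worth a quick sanity check that CRI-O is actually up before moving on. crictl comes from the cri-tools package the playbook installs; these are just verification commands, run on any node:

systemctl status crio
crictl version
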
Execute the worker playbook so that the worker nodes join the cluster:
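
The playbook itself is short. A minimal sketch that matches the task names in the run output below (not necessarily my exact file) fetches the join command from the master, saves it as a fact, and runs it on each worker:

- hosts: masters
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw
    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  tasks:
    - name: join cluster
      shell: "{{ hostvars[groups['masters'][0]].join_command }} --ignore-preflight-errors all"
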
[root@ansible ranjeet]# ansible-playbook k8s-workers.yml
PLAY [kubernetesM.ranjeetbadhe.com] *******************************************************************************************
TASK [get join command] *******************************************************************************************************
changed: [kubernetesM.ranjeetbadhe.com]
TASK [set join command] *******************************************************************************************************
ok: [kubernetesM.ranjeetbadhe.com]
PLAY [workers] ****************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************
ok: [n1.ranjeetbadhe.com]
ok: [n2.ranjeetbadhe.com]

TASK [join cluster] ***********************************************************************************************************
changed: [n2.ranjeetbadhe.com]
changed: [n1.ranjeetbadhe.com]
PLAY RECAP ********************************************************************************************************************
kubernetesM.ranjeetbadhe.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
n1.ranjeetbadhe.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
n2.ranjeetbadhe.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Complete the following steps on the master node (kubeadm init bootstraps the control plane and must have been run before the workers can join):

kubeadm init --apiserver-advertise-address=192.168.0.51 --pod-network-cidr=10.244.0.0/16
[root@kubernetesM ~]# mkdir -p $HOME/.kube
[root@kubernetesM ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubernetesM ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubernetesM ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@kubernetesM ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetesm.ranjeetbadhe.com Ready control-plane,master 19m v1.23.2
n1.ranjeetbadhe.com NotReady <none> 6s v1.23.2
n2.ranjeetbadhe.com NotReady <none> 6s v1.23.2

So far we have not installed a network plugin (CNI). The most popular plugins are:

  • Flannel
  • Calico
  • Weave
  • Cilium

We are using the Flannel plugin here; its default pod network is 10.244.0.0/16, which matches the --pod-network-cidr we passed to kubeadm init. You can deploy any plugin of your choice. Once the plugin is installed, we deploy a sample nginx pod to test our Kubernetes platform.

[root@kubernetesM ~]# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-99mtg 1/1 Running 0 27m
kube-system coredns-64897985d-rs2pk 1/1 Running 0 27m
kube-system etcd-kubernetesm.ranjeetbadhe.com 1/1 Running 0 27m
kube-system kube-apiserver-kubernetesm.ranjeetbadhe.com 1/1 Running 0 27m
kube-system kube-controller-manager-kubernetesm.ranjeetbadhe.com 1/1 Running 0 27m
kube-system kube-proxy-5n8ss 1/1 Running 0 27m
kube-system kube-proxy-hvgxp 1/1 Running 0 8m59s
kube-system kube-proxy-qlw8h 1/1 Running 0 8m59s
kube-system kube-scheduler-kubernetesm.ranjeetbadhe.com 1/1 Running 0 27m


[root@kubernetesM ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@kubernetesM ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-9jnw4 0/1 Init:0/2 0 8s
kube-flannel kube-flannel-ds-kq2kb 0/1 Init:0/2 0 8s
kube-flannel kube-flannel-ds-m76w2 0/1 Init:0/2 0 8s
kube-system coredns-64897985d-99mtg 1/1 Running 0 30m
kube-system coredns-64897985d-rs2pk 1/1 Running 0 30m
kube-system etcd-kubernetesm.ranjeetbadhe.com 1/1 Running 0 30m
kube-system kube-apiserver-kubernetesm.ranjeetbadhe.com 1/1 Running 0 30m
kube-system kube-controller-manager-kubernetesm.ranjeetbadhe.com 1/1 Running 0 30m
kube-system kube-proxy-5n8ss 1/1 Running 0 30m
kube-system kube-proxy-hvgxp 1/1 Running 0 11m
kube-system kube-proxy-qlw8h 1/1 Running 0 11m
kube-system kube-scheduler-kubernetesm.ranjeetbadhe.com 1/1 Running 0 30m
[root@kubernetesM ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-9jnw4 0/1 PodInitializing 0 25s
kube-flannel kube-flannel-ds-kq2kb 0/1 Init:1/2 0 25s
kube-flannel kube-flannel-ds-m76w2 0/1 Init:1/2 0 25s
kube-system coredns-64897985d-99mtg 1/1 Running 0 30m
kube-system coredns-64897985d-rs2pk 1/1 Running 0 30m
kube-system etcd-kubernetesm.ranjeetbadhe.com 1/1 Running 0 31m
kube-system kube-apiserver-kubernetesm.ranjeetbadhe.com 1/1 Running 0 31m
kube-system kube-controller-manager-kubernetesm.ranjeetbadhe.com 1/1 Running 0 31m
kube-system kube-proxy-5n8ss 1/1 Running 0 30m
kube-system kube-proxy-hvgxp 1/1 Running 0 12m
kube-system kube-proxy-qlw8h 1/1 Running 0 12m
kube-system kube-scheduler-kubernetesm.ranjeetbadhe.com 1/1 Running 0 31m
We are done with the deployment now.

[root@kubernetesM ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetesm.ranjeetbadhe.com Ready control-plane,master 44m v1.23.2
n1.ranjeetbadhe.com Ready <none> 25m v1.23.2
n2.ranjeetbadhe.com Ready <none> 25m v1.23.2
Let's deploy our first pod, the popular nginx.

[root@kubernetesM ~]# cat nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
[root@kubernetesM ~]# kubectl create -f nginx.yml
pod/nginx created
[root@kubernetesM ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 6s
[root@kubernetesM ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 40s
[root@kubernetesM ~]# kubectl describe pods nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         n2.ranjeetbadhe.com/192.168.0.42
Start Time:   Sun, 26 Feb 2023 23:14:34 +0530
Labels:       name=nginx
Annotations:  <none>
Status:       Running
IP:           10.244.2.2
IPs:
  IP:  10.244.2.2
Containers:
  nginx:
    Container ID:   cri-o://fe2fc29e6d93ea40fec45d6ba2dbd53b78e3432eaef333a80281c015e15c9ec0
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:6650513efd1d27c1f8a5351cbd33edf85cc7e0d9d0fcb4ffb23d8fa89b601ba8
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 26 Feb 2023 23:14:53 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5lxk9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-5lxk9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/nginx to n2.ranjeetbadhe.com
  Normal  Pulling    30s   kubelet            Pulling image "nginx"
  Normal  Pulled     11s   kubelet            Successfully pulled image "nginx" in 18.965923667s
  Normal  Created    11s   kubelet            Created container nginx
  Normal  Started    10s   kubelet            Started container nginx
[root@kubernetesM ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubernetesm.ranjeetbadhe.com Ready control-plane,master 108m v1.23.2 192.168.0.51 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 cri-o://1.23.5
n1.ranjeetbadhe.com Ready <none> 89m v1.23.2 192.168.0.41 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 cri-o://1.23.5
n2.ranjeetbadhe.com Ready <none> 89m v1.23.2 192.168.0.42 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 cri-o://1.23.5
[root@kubernetesM ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 77m 10.244.2.2 n2.ranjeetbadhe.com <none> <none>
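
As a final check you can expose the pod and reach nginx over a NodePort. The port in the curl below is a placeholder; kubectl get svc shows the port actually allocated:

kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx
curl http://192.168.0.42:<nodeport>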

Tips:

In case you get a token-expiration message while joining the cluster from a worker node:

kubeadm join 192.168.0.151:6443 --token hgsy5e.ih09izwlxhvhc2kw --discovery-token-ca-cert-hash sha256:d6cd86a7281ca6becb6ce06861e3458477eedfc67b98c571f92dab44f1d1bf2e
[preflight] Running pre-flight checks

error execution phase preflight: couldn't validate the identity of the API Server: Get "https://192.168.0.151:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": x509: certificate has expired or is not yet valid: current time 2023-09-01T12:30:03+05:30 is before 2023-09-01T07:22:05Z

Take these steps:

 kubectl get nodes
 kubeadm certs renew all
 kubeadm certs check-expiration
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 kubeadm token create --print-join-command
 kubeadm join 192.168.0.151:6443 --token loxts7.35rvi0hxo3xarp37 --discovery-token-ca-cert-hash sha256:d6cd86a7281ca6becb6ce06861e3458477eedfc67b98c571f92dab44f1d1bf2e
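
Bootstrap tokens expire after 24 hours by default, which is why the original join command stopped working. If you need a longer-lived token for staged node additions, you can set the TTL explicitly when creating it (a TTL of 0 never expires, which is convenient but less secure):

kubeadm token create --ttl 48h --print-join-command
kubeadm token list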

Please feel free to contact me if you need any further information or support for your Kubernetes deployment.

Thank you for taking the time to read my blog.
