In my previous blog https://bit.ly/4kcvuzf, we explored the quick, seamless deployment of several Kubernetes storage components: Persistent Volumes (PV), Persistent Volume Claims (PVC), StorageClasses (SC), and an NFS backend. That foundational setup gave us a robust storage layer for our workloads. Alongside those configurations, we instantiated NGINX pods behind a NodePort service, which exposed them to external traffic and let us reach the deployed application from outside the cluster.

With each step expressed as YAML manifests and kubectl commands, that exercise showed how far declarative orchestration can take you without sacrificing performance or reliability. In this episode we go a step further with KubeVirt: running virtual machines alongside containerized applications within the same cluster, improving flexibility and resource utilization. Following are the steps in this episode:
Create NFS StorageClass, PV, PVC, and the kubevirt namespace (already created)
Deploy the KubeVirt operator and its custom resource (CR)
Install the virtctl CLI (optional but helpful; a download sketch follows this list)
Deploy a VM using the NFS PVC
Access the VM
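One housekeeping item before the cluster work: fetching virtctl (step 3). The binary ships as a release asset alongside the operator manifests; a minimal sketch for a Linux amd64 host (adjust the suffix for other platforms):

# Resolve the latest release tag, same trick as used for the CR below
VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
# Download the matching virtctl binary and install it on the PATH
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64
chmod +x virtctl
sudo install virtctl /usr/local/bin/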
[root@kubemaster ~]# kubectl apply -f https://github.com/kubevirt/kubevirt/releases/latest/download/kubevirt-operator.yaml
Warning: resource namespaces/kubevirt is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/kubevirt configured
Warning: resource customresourcedefinitions/kubevirts.kubevirt.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io configured
[root@kubemaster ~]# export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
Warning: resource kubevirts/kubevirt is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
kubevirt.kubevirt.io/kubevirt configured
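Before creating any VMs, it is worth blocking until the operator reports the deployment as Available. A minimal sketch using kubectl wait against the KubeVirt CR (kv is its registered short name):

kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=5m

Once this returns, virt-api, virt-controller, and the virt-handler DaemonSet should all be up; skipping this step is what bites me a little later below.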
Cirros VM manifest (saved as vm-cirros.yaml):
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: test-vm
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
        resources:
          requests:
            memory: 512Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: virt
              chpasswd: { expire: False }
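As written, the VM boots from a containerDisk image pulled from a registry. To put the root disk on the NFS-backed storage from the previous episode instead (step 4), the volume can reference a PVC. A sketch of the volumes section, where cirros-nfs-pvc is a placeholder name for a PVC bound to the NFS StorageClass and pre-populated with a bootable disk image:

volumes:
  - name: containerdisk
    persistentVolumeClaim:
      claimName: cirros-nfs-pvc   # hypothetical PVC on the NFS StorageClass
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        password: virt
        chpasswd: { expire: False }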
[root@kubemaster ~]# kubectl apply -f vm-cirros.yaml
Warning: spec.running is deprecated, please use spec.runStrategy instead.
virtualmachine.kubevirt.io/test-vm created
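That warning is benign, but worth fixing for the future: newer KubeVirt versions prefer spec.runStrategy over the boolean. The equivalent of running: true would be:

spec:
  runStrategy: Always   # replaces the deprecated spec.running: true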
[root@kubemaster ~]# virtctl console test-vm
Internal error occurred: dialing virt-handler: dial tcp :8186: connect: connection refused
[root@kubemaster ~]# kubectl get pods -n kubevirt -o wide | grep virt-handler
virt-handler-7npsr   1/1   Running   0   2m48s   10.244.127.116   kubeworker1.ranjeetbadhe.com   <none>   <none>
virt-handler-ttl6l   0/1   Running   0   67s     10.244.50.153    kubeworker2.ranjeetbadhe.com   <none>   <none>
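That explains the refused connection: the VM apparently landed on kubeworker2 (its IP, seen later in the console, sits in the same 10.244.50.x range), and the virt-handler pod there was still 0/1 Ready, yet virt-handler is exactly what the console connection dials (port 8186). A sketch of how to wait for the DaemonSet instead of retrying by hand:

kubectl -n kubevirt rollout status daemonset/virt-handler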
[root@kubemaster ~]# kubectl get svc -n kubevirt
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubevirt-operator-webhook     ClusterIP   10.103.205.229   <none>        443/TCP   140d
kubevirt-prometheus-metrics   ClusterIP   None             <none>        443/TCP   140d
virt-api                      ClusterIP   10.111.179.205   <none>        443/TCP   140d
virt-exportproxy              ClusterIP   10.106.89.51     <none>        443/TCP   140d
[root@kubemaster ~]# virtctl version
Client Version: version.Info{GitVersion:"v1.5.0-beta.0", GitCommit:"95c3d22f3a0c548f4d46441bb2e3e18eee60daed", GitTreeState:"clean", BuildDate:"2025-01-15T21:10:48Z", GoVersion:"go1.22.10 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v1.5.0-beta.0", GitCommit:"95c3d22f3a0c548f4d46441bb2e3e18eee60daed", GitTreeState:"clean", BuildDate:"2025-01-15T22:31:14Z", GoVersion:"go1.22.10 X:nocoverageredesign", Compiler:"gc", Platform:"linux/amd64"}
[root@kubemaster ~]# kubectl get kubevirt -n kubevirt -o jsonpath="{.items[0].spec.imageTag}"
(empty output here: spec.imageTag is unset in the CR, so the operator deploys component images matching its own release)
[root@kubemaster ~]# kubectl get pods -n kubevirt -o wide | grep virt-handler
virt-handler-7npsr   1/1   Running   0   5m47s   10.244.127.116   kubeworker1.ranjeetbadhe.com   <none>   <none>
virt-handler-ttl6l   1/1   Running   0   4m6s    10.244.50.153    kubeworker2.ranjeetbadhe.com   <none>   <none>
[root@kubemaster ~]# virtctl console test-vm
Successfully connected to test-vm console. The escape sequence is ^]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
test-vm login: cirros
Password:
$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    inet 10.244.50.159/32 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1): 56 data bytes
64 bytes from 192.168.0.1: seq=0 ttl=63 time=1.737 ms
64 bytes from 192.168.0.1: seq=1 ttl=63 time=1.149 ms
^C
--- 192.168.0.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.149/1.443/1.737 ms
$ ping cisco.com
PING cisco.com (72.163.4.185): 56 data bytes
64 bytes from 72.163.4.185: seq=0 ttl=43 time=239.047 ms
64 bytes from 72.163.4.185: seq=1 ttl=43 time=239.053 ms
64 bytes from 72.163.4.185: seq=2 ttl=43 time=239.560 ms
^C
--- cisco.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 239.047/239.220/239.560 ms
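Console access works, but for everyday use SSH is more convenient. virtctl can expose the VM through a regular Kubernetes Service; a sketch using a NodePort (the service name test-vm-ssh is just my choice):

virtctl expose vm test-vm --name test-vm-ssh --port 22 --type NodePort
kubectl get svc test-vm-ssh

With the allocated node port in hand, ssh cirros@<any-node-ip> -p <nodeport> should drop you at the same prompt we reached on the console.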