A hands-on walkthrough of installing Multus with MACVLAN, the main types of CNI plugins, how MACVLAN works, and practical demo outputs.
Contents
- My Lab Setup
- Installing Multus with MACVLAN
- Types of CNI Plugins
- How MACVLAN Works
- Static IPs Across Nodes
- Multus/NAD Error & Fix
- Creating the NetworkAttachmentDefinition
- Deploying Pods with MACVLAN
- Verification
- Benefits & Drawbacks of MACVLAN
My Lab Setup
I have a single master node and two worker nodes. The master manages the cluster, while the worker nodes host workloads. This setup allows me to test Multus CNI with the MACVLAN driver in a realistic multi-node scenario.
Installing Multus with MACVLAN
Multus allows Pods to attach to more than one network. In this demo:
- Deployed Multus as a DaemonSet across all nodes.
- Created a macvlan NetworkAttachmentDefinition (NAD) binding Pods to the physical NIC ens224.
- Assigned static IPs to Pods for predictable addressing.
Types of CNI Plugins
| Driver | Key Use Case |
|---|---|
| Bridge | Simple Layer-2 networking |
| macvlan | Each Pod gets its own MAC on the LAN |
| ipvlan | Scale without introducing many MAC addresses |
| SR-IOV | High-speed, hardware-accelerated networking |
| host-device | Directly use a host NIC |
| ptp | Point-to-point connectivity |
| Overlay (Flannel/Calico/OVN) | Cross-node cluster networking |
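To make the table concrete, here is what a plugin configuration looks like once wrapped in a NetworkAttachmentDefinition. This is a minimal sketch of a bridge NAD; the NAD name, bridge name, and subnet are illustrative and not part of this lab:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-example        # illustrative name, not used in this lab
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "type": "bridge",
      "bridge": "br-demo",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/24"
      }
    }
```

The macvlan NADs used later in this post follow exactly the same structure; only the embedded CNI JSON changes.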
How MACVLAN Works
The MACVLAN driver creates virtual interfaces on top of a physical NIC. Each Pod:
- Gets its own MAC and IP address
- Appears as a unique host on the LAN
This provides direct Layer-2 connectivity to external networks, bypassing overlay encapsulation.
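The same mechanism can be reproduced by hand with iproute2, which is a useful way to see what the CNI plugin does under the hood. A minimal sketch, assuming a parent NIC named ens224; the 192.168.20.50 address is illustrative:

```bash
# Create a macvlan sub-interface on top of the physical NIC
# (bridge mode, the same mode the NAD below uses)
ip link add macvlan0 link ens224 type macvlan mode bridge

# Give it an address on the LAN subnet
ip addr add 192.168.20.50/24 dev macvlan0
ip link set macvlan0 up

# The sub-interface gets its own MAC, distinct from ens224
ip link show macvlan0
```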
Static IPs Across Nodes
In my setup:
- Pod on Worker-1: 192.168.20.100
- Pod on Worker-2: 192.168.20.200
Both Pods can ping each other successfully, confirming cross-node Layer-2 connectivity.
Multus/NAD Error & Fix (Missing CRD)
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster.ranjeetbadhe.com Ready control-plane 220d v1.29.13
kubeworker1.ranjeetbadhe.com Ready worker 220d v1.29.13
kubeworker2.ranjeetbadhe.com Ready worker 220d v1.29.13
[root@kubemaster ~]# kubectl apply -f pod-create-multus-node2.yaml
error: resource mapping not found for name: "macvlan-lan-static" namespace: "default" from "pod-create-multus-node2.yaml": no matches for kind "NetworkAttachmentDefinition" in version "k8s.cni.cncf.io/v1"
ensure CRDs are installed first
The root cause is the missing NetworkAttachmentDefinition CRD. Applying the official Multus DaemonSet manifest creates the CRD along with the RBAC objects and the DaemonSet itself:
[root@kubemaster ~]# kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "network-attachment-definitions.k8s.cni.cncf.io" not found
[root@kubemaster ~]# kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-daemon-config created
daemonset.apps/kube-multus-ds created
[root@kubemaster ~]# kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
NAME CREATED AT
network-attachment-definitions.k8s.cni.cncf.io 2025-09-03T18:22:34Z
[root@kubemaster ~]# kubectl get ds -n kube-system | grep multus
kube-multus-ds 3 3 0 3 0 <none> 42s
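Note the DaemonSet shows 0 ready pods immediately after creation. Before applying any NAD, it is worth waiting for the rollout to finish; a minimal check, assuming the default kube-system placement from the manifest above:

```bash
kubectl rollout status daemonset/kube-multus-ds -n kube-system --timeout=120s
```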
Creating the NetworkAttachmentDefinitions (Auto & Static MAC)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-lan-static
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "type": "macvlan",
      "master": "ens224",
      "mode": "bridge",
      "mtu": 1500,
      "ipam": { "type": "static" }
    }
The second NAD additionally pins the Pod's MAC address:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-static-mac
namespace: default
spec:
config: '{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "ens192",
"mode": "bridge",
"ipam": {
"type": "static",
"addresses": [
{
"address": "192.168.20.201/24",
"gateway": "192.168.20.1"
}
]
},
"mac": "02:1A:11:00:00:0A"
}'
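Both manifests are applied like any other resource; the file names below are assumptions for illustration:

```bash
kubectl apply -f nad-macvlan-lan-static.yaml
kubectl apply -f nad-macvlan-static-mac.yaml
```

Once applied, the NADs are visible cluster-wide: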
[root@kubemaster ~]# kubectl get net-attach-def -A
NAMESPACE NAME AGE
default macvlan-ens224-ext 2d3h
default macvlan-lan-static 2d3h
default macvlan-static-mac-manual1 5m53s
Deploying Pods with MACVLAN (Auto & Static)
Example Pods on Worker-1 & Worker-2:
apiVersion: v1
kind: Pod
metadata:
  name: network-troubleshooter-w1
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [{
        "name": "macvlan-lan-static",
        "interface": "net1",
        "ips": ["192.168.20.100/24"],
        "gateways": ["192.168.20.1"]
      }]
spec:
  nodeSelector:
    kubernetes.io/hostname: kubeworker1.ranjeetbadhe.com
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
[root@kubemaster ~]# cat pod-create-multus-node2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: network-troubleshooter-w2
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "macvlan-lan-static",
          "interface": "net1",
          "ips": ["192.168.20.200/24"],
          "gateways": ["192.168.20.1"]
        }
      ]
spec:
  nodeSelector:
    kubernetes.io/hostname: kubeworker2.ranjeetbadhe.com
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
[root@kubemaster ~]# kubectl apply -f pod-create-multus-node1.yaml
pod/network-troubleshooter-w1 created
[root@kubemaster ~]# kubectl apply -f pod-create-multus-node2.yaml
pod/network-troubleshooter-w2 created
[root@kubemaster ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-v1-56d97bc7c6-rjh5q 1/1 Running 3 (6h20m ago) 6d
my-app-v2-6dcf888588-687ng 1/1 Running 3 (6h20m ago) 6d
network-troubleshooter-w1 1/1 Running 0 3h20m
network-troubleshooter-w2 1/1 Running 0 3h20m
nfs-client-provisioner-76c8c74464-rknqt 1/1 Running 44 (5h19m ago) 218d
nfs-test-pod 1/1 Running 47 (19m ago) 6d9h
nginx-7d59b68bdd-2wxmk 1/1 Running 4 (6h20m ago) 6d
nginx-7d59b68bdd-bcnng 1/1 Running 7 (6h20m ago) 67d
virt-launcher-rocky-vm-q8zjm 2/2 Running 0 6h18m
virt-launcher-test-vm-9dxnb 3/3 Running 0 6h18m
virt-launcher-testvm-mgbk4 3/3 Running 0 6h18m
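Beyond kubectl get pods, Multus records the attached networks in the k8s.v1.cni.cncf.io/network-status annotation, which is a quick way to confirm net1 was wired up (a sketch; the output formatting varies by Multus version):

```bash
kubectl get pod network-troubleshooter-w1 \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
```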
Verification
From the Pod on Worker-2 (192.168.20.200):
[root@kubemaster ~]# kubectl exec -it -n default network-troubleshooter-w2 -- sh
~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 6e:bf:b6:e2:53:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.50.178/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::6cbf:b6ff:fee2:539a/64 scope link
valid_lft forever preferred_lft forever
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 5a:d9:fd:1b:71:4e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.20.200/24 brd 192.168.20.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::58d9:fdff:fe1b:714e/64 scope link
valid_lft forever preferred_lft forever
~ # ping 192.168.20.100
PING 192.168.20.100 (192.168.20.100) 56(84) bytes of data.
64 bytes from 192.168.20.100: icmp_seq=1 ttl=64 time=0.185 ms
64 bytes from 192.168.20.100: icmp_seq=2 ttl=64 time=0.157 ms
--- 192.168.20.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.157/0.171/0.185/0.014 ms
~ # ping 10.244.127.135
PING 10.244.127.135 (10.244.127.135) 56(84) bytes of data.
64 bytes from 10.244.127.135: icmp_seq=1 ttl=62 time=0.347 ms
64 bytes from 10.244.127.135: icmp_seq=2 ttl=62 time=0.186 ms
--- 10.244.127.135 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.186/0.266/0.347/0.080 ms
From the Pod on Worker-1 (192.168.20.100), the Pods can ping each other on both interfaces:
[root@kubemaster ~]# kubectl exec -it -n default network-troubleshooter-w1 -- sh
~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if77: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 3a:62:65:ab:af:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.127.135/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3862:65ff:feab:afc5/64 scope link
valid_lft forever preferred_lft forever
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether fe:1b:4b:ea:f4:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.20.100/24 brd 192.168.20.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::fc1b:4bff:feea:f444/64 scope link
valid_lft forever preferred_lft forever
~ # ping 192.168.20.200
PING 192.168.20.200 (192.168.20.200) 56(84) bytes of data.
64 bytes from 192.168.20.200: icmp_seq=1 ttl=64 time=0.146 ms
64 bytes from 192.168.20.200: icmp_seq=2 ttl=64 time=0.183 ms
--- 192.168.20.200 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1036ms
rtt min/avg/max/mdev = 0.146/0.164/0.183/0.018 ms
~ # ping 10.244.50.178
PING 10.244.50.178 (10.244.50.178) 56(84) bytes of data.
64 bytes from 10.244.50.178: icmp_seq=1 ttl=62 time=0.234 ms
64 bytes from 10.244.50.178: icmp_seq=2 ttl=62 time=0.191 ms
--- 10.244.50.178 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1026ms
rtt min/avg/max/mdev = 0.191/0.212/0.234/0.021 ms
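Since net1 sits directly on the LAN, the same test extends to external hosts. A minimal sketch, pinging the 192.168.20.1 gateway from the master, assuming the Pod names above and that the gateway answers ICMP:

```bash
kubectl exec -n default network-troubleshooter-w1 -- ping -c 2 192.168.20.1
```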
Benefits & Drawbacks of MACVLAN
Benefits
- Direct LAN integration without NAT
- Static, predictable IPs
- High performance with minimal overhead
- Ethernet-level isolation
Drawbacks
- Switch port MAC limits may cause issues
- Same-host traffic restrictions in some modes (private/VEPA block Pod-to-Pod; the host cannot reach Pods through the parent NIC)
- Static IP management adds complexity
- Not supported in many public cloud environments
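Where switch MAC limits are the blocker, the ipvlan driver from the table above sidesteps them by sharing the parent NIC's MAC across Pods. A minimal sketch of an equivalent NAD (the name is illustrative; ipam kept static to mirror this lab):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-lan-static      # illustrative name, not used in this lab
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "type": "ipvlan",
      "master": "ens224",
      "mode": "l2",
      "ipam": { "type": "static" }
    }
```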