Welcome to the first installment of our multi-part deep-dive into Kubernetes internals, with a dedicated emphasis on its networking architecture.
This technical series is designed primarily for our presales engineers and junior team members, equipping them with the foundational understanding required to confidently engage in solution design and technical discussions with telecom operator clients. The goal is to unpack the inner workings of Kubernetes—focusing on how its networking model can be optimized to meet the demanding performance, reliability, and scalability needs of production-grade telecom environments, including 4G/5G Core networks and Radio Access Networks (RAN).
In Part 1, we’ll begin by establishing foundational concepts—Pods, Containers, and Services—laying the groundwork for more advanced topics.
In Part 2, we'll explore major Container Network Interface (CNI) implementations, including Calico, Cilium, and OVN-Kubernetes. We will examine their architectural models, traffic flow mechanisms, and their suitability for the high-performance, low-latency workloads typical of telecom and edge computing environments.
For those interested in a deeper dive into BGP-based routing with Calico, I have previously published a dedicated blog post, which can be accessed here.
Each entry in this series will feature:
- Real-world lab setups
- Step-by-step deployment walkthroughs
- Production-grade YAML manifests
These artifacts are derived from my personal home lab environment and are designed to help you replicate, validate, and extend the scenarios in your own Kubernetes clusters.
Upcoming topics include:
- Pod-to-pod and node-to-node communication patterns
- Network policy enforcement
- Kubernetes service discovery and exposure models
- Load balancing, ingress, and external traffic routing
- BGP-based routing in multi-node environments
- Integration with external physical and virtual telecom infrastructure
Stay tuned as we uncover the layers of Kubernetes networking—one protocol at a time.
1. Container IP
- Containers could in principle each have their own network namespace, but in Kubernetes all containers in a pod share a single network namespace by default (held open by the pod's "pause" infrastructure container).
- A container therefore has no IP of its own: the container IP is the Pod IP.
- If a pod runs multiple containers (sidecars etc.), they all share the same IP address and communicate with each other over localhost (127.0.0.1).
Use multiple containers in a pod when they share a lifecycle, need the same node, or require shared storage or close communication. A common case is running a helper container alongside the main app for easy coordination via localhost or shared volumes. Three common ways to combine multiple containers in one pod are the sidecar, adapter, and ambassador patterns.

Multi-Container Pod Patterns in Kubernetes
- Sidecar Pattern: Runs a helper container alongside the main app to handle tasks like logging, syncing, or monitoring, isolating faults from the main app (see the manifest below).
- 🔁 Adapter Pattern: Converts application output into a standard format for consistent monitoring or aggregation.
- 🌐 Ambassador Pattern: Acts as a proxy that connects internal containers to external services by routing traffic through a local port.
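To make the sidecar pattern concrete, here is a minimal sketch of a two-container pod. The names, images, and log path are illustrative; the structure (one shared network namespace plus a shared emptyDir volume) is what the pattern relies on.

```yaml
# Minimal sidecar sketch: an nginx "main" container writes access logs to a
# shared emptyDir volume, and a busybox sidecar tails them. Both containers
# share the pod's single network namespace, so they could also talk over
# localhost. All names and image tags here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}            # shared scratch space; lives as long as the pod
  containers:
    - name: web               # the main application container
      image: nginx:1.27
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer        # the sidecar: streams the app's logs to stdout
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```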
Is Container IP static?
- No, it is dynamic by default.
- When a pod is deleted and recreated (even with the same name), the container IP (i.e., pod IP) can change.
- If you need a fixed IP, you have to use special mechanisms such as:
- StatefulSets (for predictable DNS names, not fixed IPs)
- Static IP allocation via CNI plugins (e.g., Calico IP pools; see the sketch below)
- hostNetwork mode (usually not recommended)
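As a sketch of the CNI-based option, Calico supports pinning a pod's address through the cni.projectcalico.org/ipAddrs annotation. The pod name and address below are hypothetical, and the address must fall inside an existing Calico IP pool:

```yaml
# Pinning a pod's IP with Calico's ipAddrs annotation. This is a sketch:
# it assumes Calico IPAM is in use, and 10.244.7.10 must belong to a
# configured Calico IP pool in your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: fixed-ip-pod
  annotations:
    cni.projectcalico.org/ipAddrs: '["10.244.7.10"]'
spec:
  containers:
    - name: app
      image: nginx:1.27
```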
2. Pod IP
- The Pod is the smallest deployable unit in Kubernetes.
- When a pod is created, the CNI (Container Network Interface) plugin assigns it a Pod IP.
- This IP is:
- Unique within the Kubernetes cluster.
- Used by other pods for direct communication.
- Dynamically assigned unless configured otherwise.
- When a pod is rescheduled or restarted, its Pod IP usually changes.
3. Service IP
- A Service in Kubernetes provides a stable way to access a set of pods.
- Service IP (ClusterIP):
- Is a virtual IP assigned by Kubernetes itself.
- Static for the lifetime of the Service object (unless the service is deleted and recreated).
- Clients connect to the Service IP, and Kubernetes load-balances the traffic to the matching pod(s).
- Under the hood, kube-proxy programs iptables rules (the default mode) or IPVS (better suited to large numbers of services) to map the Service IP to the actual Pod IPs; a minimal Service manifest is sketched below.
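Here is a minimal sketch of a ClusterIP Service; the name, label, and ports are illustrative:

```yaml
# A minimal ClusterIP Service. Clients inside the cluster connect to the
# stable virtual IP, and kube-proxy load-balances the traffic across the
# pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-clusterip
spec:
  type: ClusterIP          # the default; shown here for clarity
  selector:
    app: web               # pods carrying this label become endpoints
  ports:
    - protocol: TCP
      port: 80             # the Service port clients connect to
      targetPort: 8080     # the container port traffic is forwarded to
```

Any pod in the cluster can now reach the backends via the Service's stable virtual IP or its DNS name (web-clusterip.default.svc.cluster.local), regardless of how often the pods behind it are replaced.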
1. Who assigns IP to a Pod?
- CNI Plugin (Container Network Interface plugin) assigns the Pod IP.
- Kubernetes itself does not assign Pod IPs — it asks the CNI plugin to do it.
- Example CNI plugins:
- Calico
- Flannel
- Cilium
- Weave Net
- The CNI plugin talks to the node's networking stack and allocates an IP address to the pod from the node's Pod CIDR (e.g., a per-node /24 carved out of Flannel's default 10.244.0.0/16 cluster range).
- Each worker node usually has a Pod CIDR range allocated, and pods on that node get IPs from that range, as the Node excerpt below shows.
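You can inspect a node's allocation directly on the Node object. The excerpt below sketches what kubectl get node -o yaml returns for a hypothetical worker; note that some CNIs (e.g., Calico with its own IPAM) manage address pools independently of this field:

```yaml
# Excerpt of a Node object showing the per-node Pod CIDR. With Flannel's
# default 10.244.0.0/16 cluster range, each node typically receives a /24.
apiVersion: v1
kind: Node
metadata:
  name: worker-1           # hypothetical node name
spec:
  podCIDR: 10.244.1.0/24   # pods on this node get IPs from this range
  podCIDRs:
    - 10.244.1.0/24
```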
2. Who assigns IP to a Service?
- The Kubernetes API Server (specifically the kube-apiserver) assigns the Service IP.
- When you create a Service, Kubernetes reserves an IP address from a special internal Service CIDR range and assigns it to that Service.
- This IP is called the ClusterIP and is static for the lifetime of the service.
3. From where is the Cluster IP generated?
- The Cluster IP is picked from the Service CIDR range.
- The Service CIDR is configured at cluster setup time (for example, via the kube-apiserver flag --service-cluster-ip-range, or kubeadm's networking.serviceSubnet, as sketched below).
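For example, in a kubeadm-bootstrapped cluster the Service CIDR is set in the ClusterConfiguration, which kubeadm passes to the kube-apiserver as --service-cluster-ip-range. The subnets below are kubeadm-style defaults; adjust them to your environment:

```yaml
# kubeadm ClusterConfiguration excerpt: serviceSubnet is the pool that
# ClusterIPs are drawn from; podSubnet is the cluster-wide pod network
# that the CNI carves up per node.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: 10.96.0.0/12   # ClusterIPs are allocated from this range
  podSubnet: 10.244.0.0/16      # pod IPs are allocated from this range
```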
In Kubernetes, Service IPs such as ClusterIPs are virtual IP addresses that do not correspond to any physical network interface like `eth0` or `eth1`. As a result, you cannot ping (ICMP) a Kubernetes Service IP: ICMP (used by tools like `ping`) requires a real host to receive and respond to echo requests, whereas a Service IP is just a logical abstraction handled by kube-proxy using `iptables` or IPVS rules, and no actual pod or container listens on the Service IP.
From a technical standpoint, Kubernetes Services use Destination NAT (DNAT) to route TCP, UDP, or SCTP traffic to backend pods. ICMP, however, is a completely different protocol and is not processed or intercepted by kube-proxy. As a result, when you send an ICMP Echo Request (i.e., a ping) to a Service IP, the packet is simply dropped—no backend pod is aware of it, and no reply is generated. Consequently, you’ll see errors like “Destination Host Unreachable” or “Request Timed Out” when attempting to ping a ClusterIP address.
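The practical consequence: when validating a Service, test the protocol and port it actually serves instead of pinging it. A throwaway client pod like the sketch below does the job (the target URL reuses the hypothetical web-clusterip Service from earlier; the image tag is illustrative):

```yaml
# One-shot client pod: verifies that a ClusterIP answers on its TCP port
# even though it will never answer ICMP.
apiVersion: v1
kind: Pod
metadata:
  name: svc-check
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.8.0
      # The image's entrypoint is curl, so these args form the full command.
      args: ["-sS", "--max-time", "5", "http://web-clusterip.default.svc.cluster.local:80/"]
```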