
K8S Networking


12 cards — 🟢 4 easy | 🟡 4 medium | 🔴 4 hard

🟢 Easy (4)

1. What is the fundamental rule of the Kubernetes pod networking model?

Answer: Every pod gets its own IP address, and all pods can communicate with all other pods without NAT (flat network model).

Remember: K8s networking rule: every pod gets a unique IP. Pod-to-pod across nodes without NAT.

2. What are the four Kubernetes Service types?

Answer: ClusterIP (internal only, default), NodePort (static port on every node, 30000-32767), LoadBalancer (provisions an external LB), and ExternalName (DNS CNAME redirect, no proxying).

Remember: ClusterIP (internal) → NodePort (30000-32767 on every node) → LoadBalancer (external LB); ExternalName is just a DNS CNAME.
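The first two types can be sketched as manifests. This is a minimal illustration, not a production config — the names (`web-internal`, `web-nodeport`), the `app: web` label, and the ports are hypothetical:

```yaml
# Sketch: the same hypothetical app exposed two ways.
apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  type: ClusterIP        # default: virtual IP reachable only inside the cluster
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort         # opens the same static port on every node
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must fall in the 30000-32767 range
```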

3. What DNS record format does Kubernetes create for a ClusterIP Service?

Answer: <service-name>.<namespace>.svc.cluster.local — for example, backend-api.production.svc.cluster.local.

Remember: K8s DNS: `<service>.<namespace>.svc.cluster.local`. CoreDNS runs in kube-system.

Under the hood: CoreDNS auto-discovers Services. Pod /etc/resolv.conf points to it.

Fun fact: CoreDNS replaced kube-dns in K8s 1.13 (2018). It is written in Go and uses a plugin architecture.

Remember: K8s DNS format: <service>.<namespace>.svc.cluster.local. Within the same namespace, just the service name works thanks to search domains.
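As a sketch of how the record is formed, the manifest below mirrors the backend-api.production example from the answer (the selector label and ports are assumptions):

```yaml
# Sketch: once applied, CoreDNS serves
# backend-api.production.svc.cluster.local for this Service.
apiVersion: v1
kind: Service
metadata:
  name: backend-api        # <service>
  namespace: production    # <namespace>
spec:
  selector:
    app: backend-api       # assumption: backend pods carry this label
  ports:
    - port: 80
      targetPort: 8080
```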

4. What is the role of a CNI plugin in Kubernetes?

Answer: The CNI (Container Network Interface) plugin is called by the kubelet to set up and tear down pod network namespaces, assigning IPs and configuring routes so pods can communicate across nodes.

Remember: CNI plugins: Calico (policy), Cilium (eBPF), Flannel (simple). "CCF."

Gotcha: No CNI = pods on different nodes can't talk. First install after cluster init.

🟡 Medium (4)

1. What is the key difference between kube-proxy iptables mode and IPVS mode?

Answer: iptables mode rewrites the full rule chain on every Service change (O(n) updates, slow at scale), while IPVS uses kernel-level hash tables for O(1) lookup performance and better scalability beyond ~5,000 Services.

Remember: iptables = O(n) rule rewrites per change; IPVS = kernel hash tables, O(1) lookups at scale.
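Switching modes is a kube-proxy configuration change. A minimal sketch of the relevant fragment (this typically lives in the kube-proxy ConfigMap in kube-system; the round-robin scheduler choice is an assumption, others exist):

```yaml
# Sketch: KubeProxyConfiguration fragment selecting IPVS mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"           # default is "iptables"
ipvs:
  scheduler: "rr"      # assumption: round-robin load-balancing algorithm
```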

2. What is a headless Service and when is it used?

Answer: A headless Service has clusterIP: None and returns individual pod IPs directly via DNS instead of a single virtual IP. It is essential for StatefulSets where clients need to address specific pods (e.g., cassandra-0.cassandra.default.svc.cluster.local).

Remember: clusterIP: None = headless. DNS returns the pod IPs directly, not a virtual IP. StatefulSets depend on it.
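A minimal sketch matching the cassandra example in the answer (the `app: cassandra` label is an assumption about how the StatefulSet labels its pods):

```yaml
# Sketch: headless Service — DNS resolves to pod IPs,
# enabling names like cassandra-0.cassandra.default.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None        # "headless": no virtual IP is allocated
  selector:
    app: cassandra       # assumption: StatefulSet pods carry this label
  ports:
    - port: 9042         # Cassandra CQL port
```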

3. What happens when you apply a NetworkPolicy with an empty podSelector and policyTypes: [Ingress] but no ingress rules?

Answer: It creates a default-deny-ingress policy that blocks ALL inbound traffic to every pod in the namespace, since the empty podSelector matches all pods and no ingress rules means nothing is allowed.

Remember: Empty podSelector = every pod in the namespace. A listed policyType with no rules = nothing allowed in that direction.
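The policy described in the answer, written out in full (the metadata name is arbitrary; apply it per namespace):

```yaml
# Default-deny-ingress: empty podSelector matches all pods in the
# namespace; Ingress is listed as a policyType but no ingress rules
# are given, so no inbound traffic is allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```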

4. When writing a NetworkPolicy that restricts egress, why must you explicitly allow DNS traffic?

Answer: Without an explicit egress rule allowing UDP/TCP port 53, pods cannot resolve Service names via CoreDNS. This is the most common NetworkPolicy mistake — blocking egress silently breaks DNS-based service discovery.

Remember: K8s DNS: `<service>.<namespace>.svc.cluster.local`. CoreDNS runs in kube-system.

Under the hood: CoreDNS auto-discovers Services. Pod /etc/resolv.conf points to it.
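One way to sketch the DNS carve-out (the `kubernetes.io/metadata.name` label is set automatically on namespaces in K8s 1.21+; the policy name is arbitrary):

```yaml
# Sketch: egress policy that still permits DNS lookups.
# Allows UDP and TCP port 53 to the kube-system namespace (CoreDNS);
# combine with further egress rules for the traffic you actually want.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```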

🔴 Hard (4)

1. A pod can reach a Service by ClusterIP but not by DNS name. What is the most likely cause and how do you diagnose it?

Answer: Most likely a DNS search domain issue in /etc/resolv.conf. Diagnose: 1) kubectl exec -it pod -- cat /etc/resolv.conf to check the nameserver and search domains; 2) kubectl exec -it pod -- nslookup kubernetes.default to test basic DNS; 3) check the CoreDNS pods with kubectl get pods -n kube-system -l k8s-app=kube-dns.

Remember: K8s DNS: `<service>.<namespace>.svc.cluster.local`. CoreDNS runs in kube-system.

Under the hood: CoreDNS auto-discovers Services. Pod /etc/resolv.conf points to it.

2. How do you capture network traffic inside a running pod without modifying its image?

Answer: Use an ephemeral debug container (K8s 1.23+): kubectl debug -it problem-pod --image=nicolaka/netshoot --target=app-container -- tcpdump -i eth0 -nn port 8080. This attaches a debug container to the pod's network namespace without restarting the pod.

Remember: kubectl debug (K8s 1.23+) = ephemeral container sharing the pod's network namespace; no restart, no image change.

3. An Ingress resource shows correct rules but external traffic returns 404. What are the three most likely causes?

Answer: 1) Ingress controller not installed or not running. 2) ingressClassName does not match the controller's IngressClass. 3) Backend Service has no ready endpoints — check kubectl get endpoints to verify pods are running and selected.

Remember: Ingress routes HTTP/HTTPS by hostname/path. L7 only. Use NodePort/LB for L4.

Gotcha: Without an Ingress Controller deployed, Ingress resources have no effect.
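A minimal Ingress sketch showing where cause 2 bites (all names here — `nginx`, `example.com`, the `web` Service — are hypothetical):

```yaml
# Sketch: ingressClassName must match an IngressClass that a running
# controller actually claims, or the resource is silently ignored.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx      # cause 2: must match an existing IngressClass
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # cause 3: this Service needs ready endpoints
                port:
                  number: 80
```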

4. How does Cilium differ from traditional CNI plugins like Flannel?

Answer: Cilium uses eBPF programs in the Linux kernel for networking, load balancing, and security. It can replace kube-proxy entirely and provides deep network flow visibility, while Flannel only provides basic VXLAN overlay with no network policy support.

Remember: CNI plugins: Calico (policy), Cilium (eBPF), Flannel (simple). "CCF."

Gotcha: No CNI = pods on different nodes can't talk. First install after cluster init.