Cilium¶
10 cards — 🟢 3 easy | 🟡 4 medium | 🔴 3 hard
🟢 Easy (3)¶
1. What is Cilium and how does it differ from traditional CNI plugins?
Show answer
Cilium is a Kubernetes CNI plugin that uses eBPF instead of iptables for networking, security, and observability. Traditional CNIs (Calico in iptables mode, Flannel) rely on iptables/IPVS rules that scale poorly. Cilium loads eBPF bytecode directly into the kernel, providing better performance and visibility at scale.
Remember: Cilium = eBPF-based networking, security, and observability for Kubernetes. Replaces iptables with eBPF programs in the kernel.
Under the hood: eBPF programs run in the Linux kernel, attached at hooks such as tc (traffic control), XDP, and sockets, where they process packets without traversing long iptables rule chains.
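You can see these in-kernel programs directly with the kernel's `bpftool` utility. A minimal sketch, assuming `bpftool` is installed and you have root on a Cilium node (the `cil` name filter is an assumption based on Cilium's program naming):

```shell
# List eBPF programs loaded in the kernel; on a Cilium node you should see
# programs with cil-prefixed names attached at tc and socket hooks.
list_cilium_bpf() {
  sudo bpftool prog show | grep -i cil || true  # filter for Cilium-loaded programs
  sudo bpftool net show                         # show tc/XDP attachments per interface
}
```

Run `list_cilium_bpf` on a node to confirm Cilium's datapath is loaded.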
2. What is a Cilium identity and how is it assigned?
Show answer
Cilium assigns numeric identities to endpoints based on their Kubernetes labels. All pods with the same security-relevant labels share the same identity. Network policies reference these identities instead of IP addresses, so policies survive pod restarts and IP changes. Identity allocation is managed by the Cilium operator via a KVStore or CRDs.
Remember: identity = a number derived from a pod's security-relevant labels. Policies match identities, not IPs, so they survive restarts and rescheduling.
See also: Cilium documentation at docs.cilium.io. The project is a CNCF graduated project, indicating production maturity.
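Identity-based matching is what a label selector in a CiliumNetworkPolicy compiles down to. A minimal sketch (the app names frontend/backend are hypothetical) that allows only pods labeled app=frontend to reach app=backend:

```shell
# Write a label-based (identity-based) CiliumNetworkPolicy to a file.
# Apply with: kubectl apply -f backend-policy.yaml
cat > backend-policy.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
EOF
```

Because the rule references labels, it keeps working as frontend pods are rescheduled and change IPs.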
3. What are the prerequisites for running Cilium?
Show answer
Linux kernel 4.19+ (5.10+ recommended for full features). If replacing kube-proxy, create the cluster without it (kubeadm: --skip-phases=addon/kube-proxy). Cilium can run alongside other CNIs in chaining mode but works best as the primary CNI. State is stored either in CRDs (the default) or an external KVStore such as etcd.
Remember: the kernel version is the hard prerequisite — 4.19 minimum, 5.10+ for the full feature set.
See also: Cilium documentation at docs.cilium.io. The project is a CNCF graduated project, indicating production maturity.
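A typical install uses the official Helm chart. The chart values below are real Cilium Helm options; the API server address is an assumption you must fill in (required because kube-proxy is absent):

```shell
# Install Cilium as the primary CNI with kube-proxy replacement enabled.
# API_SERVER_IP must point at your control plane, since there is no kube-proxy
# to bootstrap connectivity to the kubernetes Service.
install_cilium() {
  helm repo add cilium https://helm.cilium.io/
  helm install cilium cilium/cilium --namespace kube-system \
    --set kubeProxyReplacement=true \
    --set k8sServiceHost="${API_SERVER_IP:?set API_SERVER_IP first}" \
    --set k8sServicePort=6443
}
```

Set API_SERVER_IP and call install_cilium after `kubeadm init --skip-phases=addon/kube-proxy`.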
🟡 Medium (4)¶
1. How do Cilium network policies differ from standard Kubernetes NetworkPolicy?
Show answer
CiliumNetworkPolicy extends standard NetworkPolicy with L7 (HTTP, gRPC, Kafka) filtering, DNS-aware rules (allow traffic only to specific FQDNs), identity-based policies (instead of IP-based), and the ability to inspect and filter by HTTP path, method, and headers. Standard NetworkPolicy only operates at L3/L4.
Remember: CiliumNetworkPolicy = NetworkPolicy + L7 (HTTP/gRPC/Kafka) rules, FQDN rules, and identity-based matching.
See also: Cilium documentation at docs.cilium.io. The project is a CNCF graduated project, indicating production maturity.
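The L7 and FQDN extensions can be sketched in one manifest (app names, port, and the FQDN api.example.com are hypothetical): allow frontend to call GET /healthz on backend, and let backend egress only to one external hostname.

```shell
# Write a CiliumNetworkPolicy combining an L7 HTTP rule with an FQDN rule.
# Note: in practice toFQDNs also requires allowing DNS egress so Cilium can
# observe the name resolution; omitted here for brevity.
cat > l7-policy.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-and-fqdn
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/healthz"
  egress:
    - toFQDNs:
        - matchName: api.example.com
EOF
```

None of this is expressible in a standard NetworkPolicy, which stops at ports and IP blocks.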
2. What is Hubble and what does it provide?
Show answer
Hubble is Cilium's built-in observability platform. It provides network flow visibility (source, destination, protocol, verdict), service dependency maps, DNS query monitoring, HTTP request/response metrics, and real-time flow filtering. Access via the hubble CLI, Hubble UI (web dashboard), or Prometheus metrics export.
Remember: Hubble = Cilium's observability platform. Provides service dependency maps, flow logs, and DNS visibility. Think 'Wireshark for Kubernetes.'
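The flags below are real `hubble observe` options; the namespace and pod names are hypothetical. A quick flow-inspection session might look like:

```shell
# Inspect live flows via Hubble. Requires Hubble Relay to be reachable,
# e.g. run `cilium hubble port-forward` in another terminal first.
hubble_triage() {
  hubble observe --namespace demo --last 50          # recent flows in a namespace
  hubble observe --verdict DROPPED                   # only dropped traffic
  hubble observe --pod demo/frontend --protocol dns  # DNS queries from one pod
}
```

Filtering on verdict DROPPED is usually the fastest way to spot a policy denying traffic.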
3. How does Cilium replace kube-proxy?
Show answer
Cilium can run in kube-proxy replacement mode, handling Service load balancing entirely in eBPF. Benefits: no iptables rules for Services (better performance at scale), Maglev consistent hashing for backends, socket-level load balancing (skipping NAT for local traffic), and DSR (Direct Server Return) mode for reduced latency.
Remember: Cilium's key advantage is eBPF — programs run in the kernel, avoiding the overhead of iptables rule chains. This matters at scale: iptables with 10K+ rules is slow; eBPF is constant time.
See also: Cilium documentation at docs.cilium.io. The project is a CNCF graduated project, indicating production maturity.
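To confirm the replacement is actually active, the agent reports it in its status output. A sketch (in Cilium ≥1.15 the in-pod CLI is cilium-dbg; older releases call it cilium):

```shell
# Verify the agent is handling Services in eBPF rather than via kube-proxy.
check_kpr() {
  kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
  kubectl -n kube-system exec ds/cilium -- cilium-dbg service list  # eBPF Service table
}
```

The service list shows every ClusterIP/NodePort backend mapping held in eBPF maps.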
4. How do you troubleshoot Cilium connectivity issues?
Show answer
1) cilium status — check agent health. 2) cilium endpoint list — verify endpoints are in "ready" state. 3) cilium policy get — check if policies are loaded. 4) hubble observe — watch live traffic flows and check for drops. 5) cilium monitor — low-level packet tracing. 6) Check cilium-agent logs for errors.
Remember: triage order = agent status → endpoint state → loaded policy → hubble observe for drops → cilium monitor for packet-level detail → agent logs.
See also: Cilium documentation at docs.cilium.io. The project is a CNCF graduated project, indicating production maturity.
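The six steps above can be sketched as one sequence against a single agent pod (label selector is Cilium's standard one; in Cilium ≥1.15 the in-pod CLI is cilium-dbg, older releases use cilium):

```shell
# Run the standard Cilium triage sequence against one agent pod.
cilium_triage() {
  agent=$(kubectl -n kube-system get pod -l k8s-app=cilium -o name | head -n1)
  kubectl -n kube-system exec "$agent" -- cilium-dbg status            # 1) agent health
  kubectl -n kube-system exec "$agent" -- cilium-dbg endpoint list     # 2) endpoint state
  kubectl -n kube-system exec "$agent" -- cilium-dbg policy get        # 3) loaded policy
  kubectl -n kube-system logs "$agent" -c cilium-agent --tail=50       # 6) recent errors
}
```

Steps 4 and 5 (hubble observe, cilium monitor) are interactive, so they are best run separately rather than in a batch function.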
🔴 Hard (3)¶
1. How does Cilium's eBPF datapath handle a packet?
Show answer
When a packet arrives, Cilium's eBPF program at the tc (traffic control) hook: 1) Looks up the source identity from the identity map. 2) Checks the policy map for allowed identities. 3) Performs L3/L4 filtering. 4) Optionally inspects L7 via a userspace proxy (Envoy). 5) Forwards or drops. All L3/L4 decisions happen in kernel space — no context switches.
Remember: eBPF = extended Berkeley Packet Filter. Think 'programmable kernel hooks.' Originally for packet filtering, now used for tracing, security, and networking.
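These in-kernel decisions can be inspected directly. A sketch using the agent's debug CLI (endpoint ID 123 is hypothetical — list real IDs first; cilium-dbg is the in-pod CLI name in Cilium ≥1.15):

```shell
# Inspect the eBPF policy map for one endpoint and watch datapath verdicts.
inspect_datapath() {
  kubectl -n kube-system exec ds/cilium -- cilium-dbg endpoint list        # find endpoint IDs
  kubectl -n kube-system exec ds/cilium -- cilium-dbg bpf policy get 123   # allowed identities for endpoint 123
  kubectl -n kube-system exec ds/cilium -- cilium-dbg monitor --type drop  # live drop events
}
```

The policy map output is exactly what step 2 above consults: a table of identities the endpoint may talk to.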
2. What is Cilium Cluster Mesh and when would you use it?
Show answer
Cluster Mesh connects multiple Kubernetes clusters so pods can communicate across clusters using pod IPs. Use cases: multi-cluster service discovery, cross-cluster load balancing, shared network policies, disaster recovery. It works by synchronizing identities and service endpoints across clusters via etcd or CRDs.
Remember: Cluster Mesh = pod-to-pod connectivity and service discovery across clusters, built on synchronized identities and endpoints.
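With the cilium CLI, enabling and connecting a mesh is a short sequence (the kubeconfig context names cluster-1/cluster-2 are hypothetical):

```shell
# Enable Cluster Mesh on both clusters, then connect and verify them.
mesh_connect() {
  cilium clustermesh enable --context cluster-1
  cilium clustermesh enable --context cluster-2
  cilium clustermesh connect --context cluster-1 --destination-context cluster-2
  cilium clustermesh status --context cluster-1 --wait
}
```

Once connected, a Service annotated as global is load-balanced across backends in both clusters.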
3. How does Cilium provide transparent encryption?
Show answer
Cilium supports WireGuard and IPsec for transparent pod-to-pod encryption with no application changes. WireGuard mode is simpler (one key pair per node), has lower overhead, and is generally recommended; IPsec satisfies more compliance frameworks. Both encrypt traffic between nodes; traffic between pods on the same node stays unencrypted, since it never leaves the host.
Remember: encryption is node-to-node and transparent — WireGuard for simplicity and speed, IPsec for compliance requirements.
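Turning on WireGuard is two Helm values (both are real chart options); a verification step follows, assuming an existing Helm-managed install:

```shell
# Enable transparent WireGuard encryption via Helm, then check it on the agent.
# cilium-dbg is the in-pod CLI name in Cilium >=1.15; older releases use cilium.
enable_wireguard() {
  helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
    --set encryption.enabled=true \
    --set encryption.type=wireguard
  kubectl -n kube-system exec ds/cilium -- cilium-dbg encrypt status
}
```

The encrypt status output reports the active mode and the number of peers with established keys.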