Cilium — Trivia & Interesting Facts¶
Surprising, historical, and little-known facts about Cilium and eBPF-based networking.
Cilium was the first CNI to bypass iptables entirely¶
When Cilium launched, every other Kubernetes CNI plugin relied on iptables for network policy enforcement and service routing. Cilium replaced all of that with eBPF programs attached directly to the kernel's networking stack. On large clusters with thousands of services, this eliminated the O(n) iptables rule scanning that caused measurable latency growth as cluster size increased.
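The performance difference comes down to data structures. A toy Python sketch (not Cilium code) of the two lookup strategies — iptables walks an ordered rule list until one matches, while an eBPF service map does a single hash lookup:

```python
# Illustrative only: contrast O(n) linear rule scanning (iptables-style)
# with O(1) hash-map lookup (eBPF-style service maps).
n = 10_000
services = [f"10.0.{i // 256}.{i % 256}:80" for i in range(n)]
rule_list = [(svc, f"backend-{i}") for i, svc in enumerate(services)]  # scanned in order
rule_map = dict(rule_list)  # one hash lookup, regardless of n

target = services[-1]  # worst case for the linear scan: the last rule

def linear_lookup(key):
    """iptables-style: check every rule until one matches."""
    for svc, backend in rule_list:
        if svc == key:
            return backend
    return None

def map_lookup(key):
    """eBPF-map-style: constant-time hash lookup."""
    return rule_map.get(key)

assert linear_lookup(target) == map_lookup(target) == f"backend-{n - 1}"
```

Both return the same backend, but the linear scan's cost grows with every service added, which is exactly the behavior operators observed on large iptables-based clusters.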
The name "Cilium" comes from biology¶
Cilia are hair-like structures on cells that sense the environment and control what passes through. The Cilium project chose the name because the software acts like cilia on each node — sensing network traffic and controlling what gets through based on identity-aware policies. Thomas Graf, Cilium's creator, was a long-time Linux kernel networking developer before founding Isovalent.
eBPF was originally just "extended BPF" for packet filtering¶
BPF (Berkeley Packet Filter) was created in 1992 by Steven McCanne and Van Jacobson at Lawrence Berkeley National Laboratory for tcpdump. "Extended BPF" was added to the Linux kernel in 2014 by Alexei Starovoitov, transforming it from a simple packet filter into a general-purpose in-kernel virtual machine. Cilium was one of the first major projects to exploit eBPF's full potential beyond packet capture.
Cilium can enforce policies based on DNS names, not just IPs¶
Traditional network policies work on IP addresses, but in dynamic environments IPs change constantly. Cilium's DNS-aware policy engine intercepts DNS responses, learns the IP-to-FQDN mapping, and then enforces policies based on the domain name. This means you can write a policy like "allow traffic to api.stripe.com" and Cilium handles the rest, even when Stripe's IPs rotate.
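In practice this is expressed with a `toFQDNs` rule in a `CiliumNetworkPolicy`. A sketch of such a policy — the name and pod labels are illustrative, and a DNS rule is included so Cilium can observe the lookups it learns mappings from:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-stripe-egress   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: payments           # illustrative label
  egress:
    # Allow DNS lookups, and let Cilium inspect the answers so it can
    # learn the IP-to-FQDN mapping.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Enforce egress by domain name rather than IP.
    - toFQDNs:
        - matchName: "api.stripe.com"
```

Traffic to whatever IPs `api.stripe.com` currently resolves to is allowed; everything else is dropped, even as those IPs rotate.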
Isovalent was acquired by Cisco for reportedly $1B+¶
Isovalent, the company behind Cilium, was acquired by Cisco in December 2023. This was one of the largest acquisitions of an open-source Kubernetes networking company, and it underscored how central eBPF-based networking had become to modern infrastructure. Cilium remains an open-source CNCF graduated project despite the acquisition.
Cilium replaced kube-proxy in production at scale before anyone else¶
Google's GKE Dataplane V2, launched in 2020, uses Cilium as its networking layer. This was the first major managed Kubernetes offering to replace kube-proxy with eBPF-based service routing in production at Google scale. The move validated Cilium's approach and triggered other cloud providers to accelerate their own eBPF adoption.
Hubble gives you Wireshark-like visibility without capturing packets¶
Cilium's observability layer, Hubble, provides flow-level visibility by reading eBPF event data rather than capturing raw packets. This means you get Layer 3/4/7 flow logs, DNS query logs, and HTTP request metadata with near-zero overhead. Traditional approaches required either packet capture (expensive) or sidecar proxies (complex); Hubble gets the data directly from the kernel.
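Hubble can emit these flows as JSON (`hubble observe -o json`), which makes them easy to consume programmatically. A small Python sketch over simplified flow records — the field names below are modeled loosely on Hubble's output but are illustrative, not the exact schema:

```python
import json

# Simplified flow records, loosely modeled on `hubble observe -o json`;
# field names are illustrative, not the exact Hubble schema.
raw_flows = [
    '{"verdict": "FORWARDED", "l7": {"type": "HTTP", "http": {"method": "GET", "url": "/healthz"}}}',
    '{"verdict": "DROPPED", "l7": null}',
    '{"verdict": "FORWARDED", "l7": {"type": "DNS", "dns": {"query": "api.stripe.com."}}}',
]
flows = [json.loads(f) for f in raw_flows]

# Pull out DNS queries the way a Hubble consumer might.
dns_queries = [
    f["l7"]["dns"]["query"]
    for f in flows
    if f.get("l7") and f["l7"].get("type") == "DNS"
]
dropped = sum(1 for f in flows if f["verdict"] == "DROPPED")

print(dns_queries)  # ['api.stripe.com.']
print(dropped)      # 1
```

The point is that this data arrives as structured events from the kernel, not reconstructed from raw packet captures.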
Cilium's identity system assigns numbers, not IPs, to workloads¶
Instead of thinking about source and destination IPs, Cilium assigns a numeric "security identity" to each group of pods with the same labels. Network policies are evaluated against these identities, not IPs. This means policy enforcement survives pod rescheduling, IP reuse, and even cross-cluster communication — the identity follows the workload regardless of where it runs.
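The idea can be sketched in a few lines of Python — this is a toy model of the concept, not Cilium's implementation; the identity numbers and labels are made up:

```python
# Toy sketch of identity-based policy: identities derive from label sets,
# so verdicts survive IP churn. Not Cilium's actual implementation.
from itertools import count

_ids = count(100)          # arbitrary starting point for this sketch
_identity_cache = {}

def identity_for(labels: frozenset) -> int:
    """Pods with the same labels share one numeric security identity."""
    if labels not in _identity_cache:
        _identity_cache[labels] = next(_ids)
    return _identity_cache[labels]

frontend = frozenset({"app=frontend"})
backend = frozenset({"app=backend"})

# Policy is a set of allowed (source identity, destination identity) pairs.
allowed = {(identity_for(frontend), identity_for(backend))}

def verdict(src_labels, dst_labels):
    return (identity_for(src_labels), identity_for(dst_labels)) in allowed

# No IP appears anywhere: rescheduling a frontend pod onto a new IP changes
# nothing, because its labels (and therefore its identity) are unchanged.
assert verdict(frontend, backend) is True
assert verdict(backend, frontend) is False
```

Because the policy tables key on identities rather than addresses, enforcement is unaffected by rescheduling or IP reuse, as described above.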
eBPF programs in Cilium are verified by the kernel before execution¶
Every eBPF program that Cilium loads into the kernel must pass the kernel's eBPF verifier, which proves the program will terminate, won't access invalid memory, and won't crash the kernel. This verification step is what makes eBPF fundamentally safer than kernel modules. The verifier analyzes every possible execution path — if it can't prove safety, the program is rejected.
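To make "analyzes every possible execution path" concrete, here is a toy verifier for a miniature instruction set, written in Python. This is purely pedagogical — the real eBPF verifier tracks register state, pointer bounds, and bounded loops, and is vastly more sophisticated:

```python
# Toy "verifier" for a miniature instruction set. Pedagogical sketch only:
# the real eBPF verifier is far more sophisticated.
MEM_SIZE = 8  # the only memory a program may touch in this model

def verify(program):
    """Explore every path; reject out-of-bounds loads and backward jumps
    (which could loop forever in this toy model)."""
    def walk(pc):
        while pc < len(program):
            op, arg = program[pc]
            if op == "load" and not (0 <= arg < MEM_SIZE):
                return False                 # invalid memory access
            if op == "jmp":
                if arg <= pc:
                    return False             # backward jump: may not terminate
                pc = arg
                continue
            if op == "branch":
                if arg <= pc:
                    return False
                return walk(pc + 1) and walk(arg)   # explore BOTH outcomes
            pc += 1
        return True                          # fell off the end: terminates
    return walk(0)

safe = [("load", 3), ("branch", 3), ("load", 0), ("exit", 0)]
unsafe = [("load", 99), ("exit", 0)]  # out-of-bounds load
loopy = [("jmp", 0)]                  # jumps back to itself forever

assert verify(safe)
assert not verify(unsafe)
assert not verify(loopy)
```

The shape of the argument is the same as the kernel's: if any reachable path can misbehave, the whole program is rejected before it ever runs.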
Cilium can do transparent encryption with WireGuard or IPsec¶
Cilium supports encrypting all pod-to-pod traffic transparently using either WireGuard or IPsec, with a single configuration flag. The encryption happens at the node level in the kernel, so applications don't need any changes. WireGuard mode is especially efficient because WireGuard's kernel implementation adds minimal overhead compared to userspace TLS.
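When installing via Helm, this is controlled by the `encryption` values — a minimal fragment (option names per recent Cilium releases; check your version's Helm reference):

```yaml
# Helm values enabling transparent node-to-node WireGuard encryption
# for pod traffic. No application changes required.
encryption:
  enabled: true
  type: wireguard   # or "ipsec"
```

With this set, each node gets a WireGuard peer and pod-to-pod traffic crossing nodes is encrypted in the kernel.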
Bandwidth management in Cilium uses EDT, not token buckets¶
Cilium's bandwidth manager uses Earliest Departure Time (EDT) scheduling rather than traditional token bucket rate limiting. EDT, developed by Google's networking team (Van Jacobson and others), assigns each packet a timestamp for when it should depart, producing smoother traffic flows with less burstiness and lower latency than token bucket approaches. This is the same technique Google uses internally.
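The core of EDT pacing fits in a few lines. A toy Python sketch (illustrative, not Cilium's eBPF code): each packet is stamped with a departure time, and consecutive timestamps are spaced by `packet_size / rate`, so the flow is smoothed rather than released in bursts:

```python
# Toy Earliest-Departure-Time pacer. Illustrative only — Cilium implements
# this with eBPF timestamps consumed by the kernel's fq qdisc.
def edt_schedule(packets, rate_bps, now=0.0):
    """Assign a departure time (seconds) to each (name, size_bytes) packet."""
    next_departure = now
    schedule = []
    for name, size_bytes in packets:
        depart_at = max(now, next_departure)
        schedule.append((name, depart_at))
        # Reserve the wire time this packet consumes at the target rate.
        next_departure = depart_at + (size_bytes * 8) / rate_bps
    return schedule

# Three 1250-byte packets at 100 kbit/s: each occupies 0.1 s of wire time,
# so departures land exactly 0.1 s apart — no burst, no token debt.
sched = edt_schedule([("p1", 1250), ("p2", 1250), ("p3", 1250)], rate_bps=100_000)
print(sched)
```

Unlike a token bucket, which can release a saved-up burst all at once, EDT spreads packets evenly across time, which is what yields the smoother flows and lower latency described above.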