# eBPF Observability — Trivia & Interesting Facts

Surprising, historical, and little-known facts about eBPF observability.
## BPF was originally created in 1992 to filter network packets efficiently

Steven McCanne and Van Jacobson at Lawrence Berkeley National Laboratory created the Berkeley Packet Filter (BPF) in 1992 for tcpdump. Their key insight was that running a small program inside the kernel to filter packets was far more efficient than copying every packet to user space. This original BPF was a simple virtual machine with just two registers: an accumulator and an index register.
## eBPF was called "the biggest change to Linux since Linux itself"
Brendan Gregg, the renowned performance engineer, described eBPF as the biggest innovation in Linux observability since the creation of Linux in 1991. eBPF allows custom programs to run safely inside the kernel without modifying kernel source code or loading kernel modules — something previously considered impossible without compromising system stability.
## The eBPF verifier rejects programs that might crash the kernel

Every eBPF program must pass static analysis by the in-kernel verifier before it can run. The verifier checks for unbounded loops (bounded loops have only been permitted since kernel 5.3), out-of-bounds memory access, uninitialized variables, and other unsafe patterns; programs that fail verification are rejected outright. This verifier, while sometimes frustratingly strict, is what makes it safe to run custom code inside a production kernel.
## Cilium replaced iptables with eBPF and made Kubernetes networking up to 10x faster

Cilium, the eBPF-based Kubernetes CNI plugin created by Isovalent (founded 2017), replaced iptables-based packet processing with eBPF programs attached to network interfaces. In benchmarks, this approach showed up to 10x higher network throughput and dramatically lower latency in service mesh scenarios. Cilium also underpins Dataplane V2, the eBPF networking stack that Google Kubernetes Engine now enables by default for new clusters.
## eBPF can trace virtually any kernel function without recompilation

Using kprobes and tracepoints, eBPF programs can attach to virtually any function in the Linux kernel and inspect arguments, return values, and internal state. This means you can answer questions like "which process is causing disk I/O latency spikes" or "what DNS queries is this container making" without modifying any application code or kernel configuration.
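As a minimal sketch of the idea, the bpftrace program below attaches a kprobe/kretprobe pair to the kernel's `vfs_read()` (one illustrative attach point among thousands; the probe and map names here are arbitrary) to build a per-process read-latency histogram. Running it requires root and a Linux kernel with eBPF support:

```bpftrace
// Time vfs_read() calls without touching kernel or application code.
kprobe:vfs_read
{
    @start[tid] = nsecs;   // timestamp on entry, keyed by thread id
}

kretprobe:vfs_read
/@start[tid]/              // only fire if we saw the matching entry
{
    // log2 latency histogram in microseconds, keyed by process name
    @read_usecs[comm] = hist((nsecs - @start[tid]) / 1000);
    delete(@start[tid]);   // free the entry to keep the map small
}
```

On Ctrl-C, bpftrace prints the accumulated histograms, one per process name — exactly the kind of "which process is slow?" answer the paragraph above describes.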
## Netflix uses eBPF to debug performance issues in real time on production servers
Netflix's performance engineering team uses bcc (BPF Compiler Collection) and bpftrace tools extensively in production. They can trace TCP retransmits, analyze filesystem latency, profile CPU scheduler decisions, and inspect memory allocation patterns — all without restarting services or deploying instrumented builds. This capability has reduced their mean time to diagnose performance issues from hours to minutes.
## bpftrace was modeled after DTrace, a technology considered lost after the decline of Solaris

DTrace, created at Sun Microsystems in 2003, was a revolutionary dynamic tracing framework for Solaris. Its CDDL license kept it out of the mainline Linux kernel, and when Oracle effectively ended Solaris development, its capabilities seemed lost for good. bpftrace — started by Alastair Robertson and first released in 2018, with Brendan Gregg contributing many of its tools and one-liners — brought DTrace-like power to Linux as a high-level tracing language for eBPF, closing a decade-long observability gap.
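The resemblance is easy to see in practice: DTrace's most famous demo, counting system calls per process, is a near one-liner in bpftrace as well (a sketch; run with root privileges via `bpftrace -e '...'`):

```bpftrace
// Count system calls by process name — the classic DTrace demo, in bpftrace.
tracepoint:raw_syscalls:sys_enter
{
    @syscalls[comm] = count();
}
```

bpftrace even kept DTrace's probe { action } block structure and `@`-prefixed aggregation variables, which is why DTrace users found it immediately familiar.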
## eBPF programs share data with user space through special map data structures

eBPF maps — hash tables, arrays, ring buffers, and other data structures shared between kernel-space eBPF programs and user-space applications — are the primary mechanism for getting observability data out of the kernel. A single eBPF hash map can track millions of entries (e.g., per-connection latency histograms) with minimal overhead, enabling real-time analytics that would be impractical through traditional kernel interfaces.
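bpftrace makes this split visible: every `@` variable below is backed by an eBPF map that the kernel-side probe updates and the user-space bpftrace process reads out and prints on exit. A small sketch (the map name is arbitrary; older bpftrace releases write `args->ret`, newer ones also accept `args.ret`):

```bpftrace
// Sum bytes returned by read() per process, in a kernel-resident hash map.
tracepoint:syscalls:sys_exit_read
/args->ret > 0/            // ignore failed or empty reads
{
    @bytes[comm] = sum(args->ret);   // kernel side updates the map in place
}
```

The aggregation happens entirely in the kernel; user space only fetches the finished totals, which is why the overhead stays low even at millions of events per second.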
## Microsoft is implementing eBPF for Windows, and other platforms may follow

The success of eBPF on Linux prompted Microsoft to develop eBPF for Windows (announced 2021), with the goal of letting the same observability tools work cross-platform. Apple has likewise explored safer alternatives to traditional kernel extensions. The eBPF Foundation, established under the Linux Foundation in 2021, aims to standardize eBPF across operating systems.
## A single eBPF program can replace an entire sidecar proxy in a service mesh
Traditional service meshes like Istio use sidecar proxies (Envoy) that add latency and memory overhead per pod. eBPF-based service meshes like Cilium Service Mesh can handle L3/L4 networking, mTLS, and observability directly in the kernel, eliminating the sidecar entirely. This "sidecar-less" approach reduces per-pod memory overhead from ~50 MB (Envoy) to essentially zero for the networking layer.