
Envoy Proxy — Trivia & Interesting Facts

Surprising, historical, and little-known facts about Envoy Proxy.


Envoy was built at Lyft by Matt Klein in 2015–2016

Matt Klein, a senior infrastructure engineer at Lyft, began writing Envoy in 2015 to solve Lyft's microservices reliability problems. Lyft was decomposing a monolith into hundreds of services and encountering the full distributed systems failure mode zoo: cascading failures, opaque timeouts, connection pool exhaustion, and no distributed tracing. Klein wrote Envoy in C++ specifically because Lyft's services were a mix of languages and he needed a network proxy that was language-agnostic. The initial open-source release was September 2016.


Envoy graduated from CNCF in November 2018

Envoy was donated to the Cloud Native Computing Foundation (CNCF) and accepted as an incubating project in September 2017. It graduated to CNCF top-level project status in November 2018, just 14 months after entering incubation, one of the fastest graduations in CNCF history at the time. That made Envoy only the third CNCF project ever to graduate, after Kubernetes and Prometheus (CoreDNS followed in early 2019).


The xDS API became a vendor-neutral standard

When Lyft open-sourced Envoy, the xDS (discovery service) protocol was Envoy-specific. As other proxy projects (including gRPC itself) adopted xDS for service discovery, CNCF formalized xDS as an independent vendor-neutral API standard. Today xDS v3 is implemented not just by Envoy but also by gRPC's built-in load-balancing, Cilium, and several other data plane projects. A control plane that speaks xDS can manage multiple different proxy implementations.
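To make this concrete, here is a hedged sketch of an Envoy v3 bootstrap that sources all listener and cluster configuration from a single aggregated xDS (ADS) stream. The control-plane address, node IDs, and cluster name are placeholders, not anything from a real deployment:

```yaml
# Sketch: minimal ADS bootstrap. All names/addresses below are placeholders.
node:
  id: example-node
  cluster: example-cluster

dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster      # must match the static cluster below
  lds_config:                            # listeners come from the ADS stream
    ads: {}
    resource_api_version: V3
  cds_config:                            # clusters come from the ADS stream
    ads: {}
    resource_api_version: V3

static_resources:
  clusters:
    - name: xds_cluster
      type: STRICT_DNS
      connect_timeout: 1s
      # xDS is served over gRPC, so the control-plane cluster must speak HTTP/2.
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: my-control-plane.internal   # placeholder
                      port_value: 18000
```

Because the bootstrap only names the control plane, the same file works whether that control plane is go-control-plane, istiod, or Consul: anything that serves xDS v3 can fill in the rest.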


Envoy is written in C++ for deterministic latency

The choice of C++ over Go or Rust was deliberate: C++ gives deterministic memory management without garbage collector pauses. In a proxy that handles millions of requests per second, a GC pause of even a few milliseconds causes visible latency spikes. Envoy avoids this by using a custom memory allocator (tcmalloc by default) and by explicitly managing object lifetimes. This makes Envoy harder to contribute to but more predictable in production.


Hot restart was designed so Lyft could deploy without downtime windows

Before Envoy, Lyft's service proxies required maintenance windows for upgrades because they could not restart without dropping connections. Klein designed hot restart — where a new Envoy process takes over sockets from the old one without dropping connections — as a first-class feature, not an afterthought. The protocol for passing socket file descriptors between Envoy processes is implemented in approximately 300 lines of C++ in source/server/hot_restart_impl.cc.


Nearly every major service mesh uses Envoy as its data plane

Istio, AWS App Mesh, Consul Connect (HashiCorp), Kuma (Kong), Gloo Mesh (Solo.io), and Cilium (in sidecar-optional mode) all use Envoy as their data plane proxy; Linkerd, with its purpose-built Rust micro-proxy, is the notable exception. The control planes differ significantly (Istio's istiod is complex; Consul's xDS server is integrated into the Consul agent) but the traffic-handling binary is the same Envoy executable in each case. When you debug one of these meshes in production, you ultimately end up reading Envoy stats and logs, regardless of the control plane brand.


Envoy's WebAssembly plugin system replaces Lua and C++ extension points

Early Envoy extensibility required writing C++ filters compiled into the binary. Lua filters were added as a scripting alternative but had limitations. The WASM (WebAssembly) extension system, developed collaboratively by Google, Istio, and the Envoy community, provides true plugin isolation: WASM modules run in a sandboxed VM, cannot crash the proxy process, and can be loaded and unloaded at runtime. Envoy uses the Proxy-WASM ABI specification, which is also implemented by NGINX (via the ngx_wasm_module), enabling portable filter code that runs on multiple proxy implementations.
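As an illustration, loading a compiled WASM module is a configuration change rather than a rebuild of Envoy. This is a hedged sketch of the relevant `http_filters` fragment; the plugin name and module path are placeholders, and the surrounding listener configuration is omitted:

```yaml
# Sketch: fragment of an HttpConnectionManager's http_filters list.
http_filters:
  - name: envoy.filters.http.wasm
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
      config:
        name: my_filter                        # placeholder plugin name
        vm_config:
          runtime: envoy.wasm.runtime.v8       # Envoy's built-in V8 runtime
          code:
            local:
              filename: /etc/envoy/my_filter.wasm   # placeholder path
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The `.wasm` module itself is built separately with one of the Proxy-WASM SDKs (Rust, Go, C++, AssemblyScript), which is what makes the same compiled filter portable across Proxy-WASM hosts.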


Envoy Mobile brought the same proxy to iOS and Android

In 2019, the Lyft team announced Envoy Mobile, a port of Envoy's core networking stack to mobile clients. Envoy Mobile runs as a library inside iOS and Android apps, giving mobile clients the same circuit breaking, retry logic, observability, and protocol support (HTTP/2, gRPC) that services get in the mesh. The project was motivated by the observation that mobile clients are the outermost edge of the service mesh but receive none of its reliability guarantees. Envoy Mobile merged into the main Envoy repository in 2022.


Envoy's GitHub star count grew from zero to 20,000+ in just over five years

When Envoy was open-sourced in September 2016, it was unknown outside Lyft. By 2018, it had 5,000 stars. By 2020, after Istio's adoption drove mainstream awareness, it had crossed 15,000 stars. By early 2022 it exceeded 20,000 stars, a growth rate faster than most major infrastructure projects. The growth closely tracked Kubernetes adoption: as organizations moved to k8s and discovered they needed a service mesh, Envoy became the answer to "what runs the sidecar."


Envoy added gRPC-Web support to enable browsers to call gRPC services directly

gRPC uses HTTP/2 trailers to communicate status codes — a feature browsers cannot access via the standard Fetch API. Envoy's gRPC-Web filter bridges the gap: a browser sends an HTTP/1.1 (or HTTP/2 without trailers) gRPC-Web request to Envoy, which translates it to a standard gRPC request to the upstream service and translates the response back. This allows browser JavaScript to call gRPC backends without a separate REST transcoding layer. The feature was jointly developed by Google and the Envoy community and is now part of the official gRPC specification as the gRPC-Web protocol.
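In Envoy configuration, the translation is a filter-chain change rather than new infrastructure. A hedged sketch of the relevant `http_filters` fragment (the surrounding listener and route configuration are omitted; the CORS filter is included because browsers making cross-origin gRPC-Web calls will also need CORS handling):

```yaml
# Sketch: http_filters fragment enabling gRPC-Web translation at the edge.
http_filters:
  - name: envoy.filters.http.grpc_web
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
  - name: envoy.filters.http.cors
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

With this in place, a browser client generated by protoc's grpc-web plugin talks plain HTTP to Envoy, and the upstream service sees ordinary gRPC.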