
gRPC — Trivia & Interesting Facts

Surprising, historical, and little-known facts about gRPC and remote procedure calls.


gRPC is Google's third-generation internal RPC framework

The "g" in gRPC does not stand for "Google" — officially it stands for something different in every release; the project's README has assigned it meanings like "gRPC", "good", and "green". Internally, Google had used Stubby as its RPC framework since the early 2000s, handling billions of requests per second across Google's infrastructure. gRPC was created as an open-source successor to Stubby, released in 2015, bringing Google's internal RPC patterns to the rest of the world.


HTTP/2 was chosen as the transport specifically for multiplexing

gRPC runs on HTTP/2, which provides multiplexed streams over a single TCP connection. This means multiple concurrent RPC calls can share one connection without head-of-line blocking at the application layer. The choice of HTTP/2 also means gRPC can traverse existing HTTP infrastructure (proxies, load balancers, firewalls), though many of these components initially couldn't handle HTTP/2 properly and caused mysterious failures.


Protocol Buffers are 3-10x smaller and 20-100x faster than JSON

gRPC uses Protocol Buffers (protobuf) as its default serialization format. Because protobuf uses a compact binary encoding with field numbers instead of field names, a typical protobuf message is 3-10x smaller than its JSON equivalent. Serialization and deserialization are 20-100x faster than JSON parsing. This matters enormously in microservice architectures where serialization overhead can dominate latency.
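The compactness is easy to see by hand-encoding a message. Below is a pure-Python sketch of the protobuf wire format for a made-up message (message User { string name = 1; uint64 id = 2; } — the field names and numbers are illustrative), compared against the equivalent JSON:

```python
import json

def varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        out.append(low | (0x80 if n else 0))
        if not n:
            return bytes(out)

def tag(field_number: int, wire_type: int) -> bytes:
    """A field's key on the wire: (field_number << 3) | wire_type."""
    return varint((field_number << 3) | wire_type)

def encode_user(name: str, user_id: int) -> bytes:
    """Hand-rolled encoding of: message User { string name = 1; uint64 id = 2; }"""
    name_bytes = name.encode("utf-8")
    return (tag(1, 2) + varint(len(name_bytes)) + name_bytes  # wire type 2 = length-delimited
            + tag(2, 0) + varint(user_id))                    # wire type 0 = varint

pb = encode_user("ada", 150)
js = json.dumps({"name": "ada", "id": 150}).encode()
print(len(pb), len(js))  # → 8 26
```

The protobuf encoding spends one byte per field on the tag; the JSON version repeats every field name as a quoted string, plus punctuation, before compression even enters the picture.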


gRPC supports four communication patterns, not just request-response

Unlike REST which is fundamentally request-response, gRPC supports four patterns: unary (request-response), server streaming (one request, stream of responses), client streaming (stream of requests, one response), and bidirectional streaming (both sides stream simultaneously). Bidirectional streaming enables use cases like real-time chat, live monitoring, and collaborative editing that are awkward or impossible with REST.
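In a .proto file the four patterns differ only in where the stream keyword appears. A hypothetical service showing all four (service and message names are invented for illustration):

```proto
service ChatService {
  // unary: one request, one response
  rpc GetProfile(ProfileRequest) returns (Profile);

  // server streaming: one request, many responses
  rpc WatchPresence(PresenceRequest) returns (stream PresenceUpdate);

  // client streaming: many requests, one response
  rpc UploadHistory(stream ChatMessage) returns (UploadSummary);

  // bidirectional streaming: both sides stream independently
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
```

Generated stubs change shape accordingly — streaming sides become iterators or observers rather than single values — but underneath, all four patterns ride the same HTTP/2 stream.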


The gRPC health checking protocol was standardized to fix a real problem

Before the gRPC health checking protocol (defined in grpc.health.v1), every gRPC service had its own ad-hoc way of reporting health, making it impossible to build generic load balancers and orchestrators. The standardized health check service uses a simple protobuf interface that returns SERVING, NOT_SERVING, or UNKNOWN. Kubernetes gRPC probes (alpha in Kubernetes 1.23, enabled by default since 1.24 in 2022) use this protocol natively.
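Modulo comments, the grpc.health.v1 interface essentially amounts to:

```proto
service Health {
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}

message HealthCheckRequest {
  string service = 1;  // empty string asks about overall server health
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3;  // Watch only: requested service not registered
  }
  ServingStatus status = 1;
}
```

A Kubernetes grpc: probe simply calls Health/Check on the configured port and treats anything other than SERVING as a failure.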


gRPC-Web exists because browser APIs cannot do HTTP/2 at the frame level

Browsers support HTTP/2 for fetching resources, but the browser JavaScript APIs (fetch/XHR) expose neither the HTTP/2 framing layer needed for gRPC's bidirectional streaming nor the response trailers that carry gRPC status codes. gRPC-Web is a modified protocol that works within those constraints, running over HTTP/1.1 or HTTP/2 and encoding gRPC frames in a browser-compatible way. It requires a proxy (such as Envoy) to translate between gRPC-Web and native gRPC.
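The gRPC-Web framing itself is simple enough to sketch in a few lines of Python (the payload bytes are illustrative; the flag byte and length prefix follow the gRPC-Web protocol):

```python
import base64
import struct

def frame(payload: bytes, is_trailer: bool = False) -> bytes:
    """gRPC-Web framing: 1 flag byte (0x80 marks a trailer frame)
    followed by a 4-byte big-endian payload length."""
    flag = 0x80 if is_trailer else 0x00
    return struct.pack(">BI", flag, len(payload)) + payload

# A response body is data frames followed by one trailer frame whose
# payload is the trailers rendered as HTTP/1-style header lines.
body = frame(b"\x0a\x03ada") + frame(b"grpc-status: 0\r\n", is_trailer=True)

# application/grpc-web-text wraps the whole body in base64 so it can
# ride over text-only transports.
text_body = base64.b64encode(body)
```

Moving the trailers into a flagged frame inside the response body is the key trick: it removes the dependency on HTTP trailer support anywhere between the browser and the proxy.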


Deadline propagation is gRPC's killer feature that nobody talks about

gRPC has built-in deadline propagation: when Service A calls Service B with a 5-second deadline, and Service B calls Service C, the remaining time is automatically passed to Service C. If 3 seconds were spent in A-to-B, Service C gets only 2 seconds. This prevents the cascading timeout problem where downstream services waste resources on requests that the upstream caller has already abandoned.
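The mechanics reduce to subtraction against an absolute deadline. A minimal Python sketch of the scenario above (the function name is invented; on the wire, the value travels in the grpc-timeout request header and is recomputed at every hop):

```python
def remaining_budget(deadline: float, now: float) -> float:
    """Time budget a hop forwards downstream, derived from the
    absolute deadline rather than a fresh per-hop timeout."""
    return max(0.0, deadline - now)

# Hypothetical trace (times in seconds from the start of the request):
# Service A calls Service B with a 5-second deadline.
deadline = 5.0

# B spends 3 seconds working before it calls Service C...
budget_for_c = remaining_budget(deadline, now=3.0)
print(budget_for_c)  # → 2.0 — C inherits the remainder, not a fresh 5s
```

If the budget reaches zero before C is even called, a well-behaved client fails fast with DEADLINE_EXCEEDED instead of issuing the call at all.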


gRPC uses HTTP/2 trailers for status codes, breaking many proxies

In gRPC, the status code and error message are sent in HTTP/2 trailing headers (trailers) after the response body. Many HTTP proxies, CDNs, and load balancers either strip trailers or don't support them, which silently corrupts gRPC responses. This incompatibility was the source of countless debugging sessions in the early days of gRPC adoption and drove the creation of gRPC-Web.
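On the wire, a successful unary call looks roughly like this; note that :status: 200 arrives up front, while the actual RPC outcome only appears in the final HEADERS frame (the trailers):

```
→ HEADERS  :method: POST, :path: /pkg.Service/Method,
           content-type: application/grpc
→ DATA     <length-prefixed request message>            (END_STREAM)

← HEADERS  :status: 200, content-type: application/grpc
← DATA     <length-prefixed response message>
← HEADERS  grpc-status: 0, grpc-message: ""             (trailers, END_STREAM)
```

A proxy that strips that last frame leaves the client with a response body but no verdict, which gRPC clients surface as an error rather than a success.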


The protobuf "field number" system enables backward compatibility by design

In protobuf, fields are identified by number, not name. This means you can add new fields (with new numbers), remove fields (by reserving old numbers), and rename fields without breaking existing clients. This wire-format stability is why gRPC services can be upgraded independently — a property that REST APIs with JSON achieve only through careful convention, not structural guarantee.
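In .proto terms, the same safe-evolution rules look like this for a hypothetical message:

```proto
message User {
  // Fields 2 ("email") and 4 were deleted. Reserving their numbers
  // (and old names) stops anyone from reusing them with a new meaning.
  reserved 2, 4;
  reserved "email";

  string name = 1;           // renaming this is wire-compatible:
                             // only the number 1 appears on the wire
  int64 created_at = 3;
  string display_name = 5;   // new field: old clients skip unknown
                             // field numbers and keep working
}
```

The reserved lines make protoc reject any future edit that reuses those numbers or names, turning the compatibility convention into a compile-time check.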


Envoy proxy was built largely because of gRPC

Envoy was developed at Lyft and open-sourced in 2016, partly because existing proxies (nginx, HAProxy) couldn't handle gRPC properly — they lacked HTTP/2 upstream support, trailer handling, and gRPC-aware load balancing. Envoy's native gRPC support, including gRPC-JSON transcoding and gRPC health checking, made it the de facto proxy for gRPC services and eventually the foundation of most service meshes.
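As an illustration, pointing Envoy at a gRPC backend with HTTP/2 upstream and standardized health checking is pure configuration — a sketch against Envoy's v3 API, with the cluster name, address, and thresholds invented for the example:

```yaml
clusters:
- name: backend_grpc
  connect_timeout: 1s
  type: STRICT_DNS
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}   # speak HTTP/2 to the backend
  health_checks:
  - timeout: 1s
    interval: 5s
    unhealthy_threshold: 2
    healthy_threshold: 2
    grpc_health_check: {}            # calls grpc.health.v1.Health/Check
  load_assignment:
    cluster_name: backend_grpc
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: backend, port_value: 50051 }
```

The grpc_health_check block drives the same grpc.health.v1 protocol that generic orchestrators rely on, so one health endpoint serves both the mesh and the proxy.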