Quiz: gRPC & Protocol Buffers

6 questions

L1 (3 questions)

1. What are the four types of gRPC service methods and when would you use each?

Answer:
1. Unary: single request, single response — standard RPC calls (most common).
2. Server streaming: single request, stream of responses — real-time feeds, large result sets.
3. Client streaming: stream of requests, single response — file upload, batch processing.
4. Bidirectional streaming: both sides stream — chat, real-time collaboration. Choose based on data flow pattern. Most services start with unary and add streaming only when needed.
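The four patterns map directly onto proto service syntax; a minimal sketch (service, method, and message names are illustrative, and the message definitions are omitted):

```proto
syntax = "proto3";

service DemoService {
  // Unary: one request, one response.
  rpc GetUser(UserRequest) returns (UserReply);
  // Server streaming: one request, a stream of responses.
  rpc WatchFeed(FeedRequest) returns (stream FeedEvent);
  // Client streaming: a stream of requests, one response.
  rpc UploadFile(stream FileChunk) returns (UploadStatus);
  // Bidirectional streaming: both sides stream independently.
  rpc Chat(stream ChatMessage) returns (stream ChatMessage);
}
```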

2. Why does gRPC use Protocol Buffers instead of JSON, and what are the trade-offs?

Answer: Protocol Buffers (protobuf) provide: binary serialization (5-10x smaller than JSON), schema enforcement (proto files define the contract), code generation (type-safe clients in any language), backward/forward compatibility (field numbers allow evolution). Trade-offs: not human-readable (need tools to inspect), requires a proto compilation step, harder to debug with curl. Use gRPC for internal service-to-service communication where performance matters. Use JSON/REST for public APIs and browser clients.
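The contract lives in a .proto file, and it is the field numbers, not the names, that identify fields on the wire; an illustrative message:

```proto
message User {
  int64 id = 1;      // field number 1 is the wire identity of this field
  string email = 2;  // encoded as tag 2 + length + UTF-8 bytes
  bool active = 3;   // omitted from the wire entirely when false (proto3 default)
}
```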

3. What are gRPC interceptors and how are they used for cross-cutting concerns?

Answer: Interceptors (middleware) wrap every RPC call to add cross-cutting behavior. Unary interceptors handle single request/response; stream interceptors handle streaming calls. Common uses: logging (log every call with method, duration, status), authentication (validate tokens in metadata), metrics (count calls, measure latency), retry logic, and tracing (inject/extract trace context). Chain multiple interceptors in order. They are the gRPC equivalent of HTTP middleware and keep business logic clean of infrastructure concerns.
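The wrapping pattern can be sketched without the real grpc library; this simplified Python model (the handler/interceptor signatures here are invented for illustration, not the actual grpc API) shows how chained interceptors nest around a handler, outermost first:

```python
import time

def logging_interceptor(handler):
    """Wrap a handler to log method name and duration."""
    def wrapped(method, request):
        start = time.monotonic()
        response = handler(method, request)
        print(f"{method} took {time.monotonic() - start:.4f}s")
        return response
    return wrapped

def auth_interceptor(handler):
    """Reject calls that lack a token (stand-in for a metadata check)."""
    def wrapped(method, request):
        if not request.get("token"):
            raise PermissionError("UNAUTHENTICATED")
        return handler(method, request)
    return wrapped

def handler(method, request):
    """The business logic, unaware of logging or auth."""
    return {"echo": request["payload"]}

# Chain: logging runs first, then auth, then the handler.
chained = logging_interceptor(auth_interceptor(handler))
print(chained("/demo.Echo/Say", {"token": "t", "payload": "hi"}))
```

The same nesting is what a real interceptor chain does: each layer sees the call before and after the layers inside it.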

L2 (3 questions)

1. How does gRPC handle load balancing and why is it more complex than HTTP/1.1 load balancing?

Answer: gRPC uses HTTP/2, which multiplexes many requests over a single long-lived TCP connection. Traditional L4 load balancers (TCP) assign all requests on one connection to the same backend — defeating load balancing. Solutions:
1. L7 load balancing (Envoy, nginx with grpc_pass) that inspects HTTP/2 frames and distributes per-request.
2. Client-side load balancing (gRPC name resolver + pick_first/round_robin).
3. Look-aside load balancing (external service like xDS/Envoy). In Kubernetes, use headless services + client-side LB or an Envoy sidecar.
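Client-side balancing boils down to the client holding the resolved address list and choosing a backend per call, not per connection; a toy Python sketch of a round_robin-style picker (class and address values are invented, and real gRPC resolvers also re-resolve names and track subchannel health):

```python
import itertools

class RoundRobinPicker:
    """Toy client-side balancer: resolve once, then rotate per call."""
    def __init__(self, addresses):
        self._cycle = itertools.cycle(addresses)

    def pick(self):
        """Return the backend for the next RPC, wrapping around."""
        return next(self._cycle)

picker = RoundRobinPicker(["10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051"])
calls = [picker.pick() for _ in range(4)]
print(calls)  # each request goes to the next backend in turn
```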

2. How do you implement deadlines and cancellation propagation in gRPC?

Answer: Set a deadline (absolute time) or timeout (relative duration) on every gRPC call — never make an RPC without one. The deadline propagates through the call chain: if service A calls B with a 5-second deadline, and B calls C, the remaining budget propagates to C. If the deadline expires, all in-flight RPCs in the chain are cancelled (DEADLINE_EXCEEDED). Servers should check ctx.Err() or context.is_active() and abort expensive work early. Without deadlines, a slow downstream causes thread/goroutine pile-up across all upstream services.
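The budget arithmetic is the key idea: each hop converts the shared absolute deadline back into the time it has left. A minimal Python sketch (function names and the simulated costs are illustrative, not a gRPC API):

```python
import time

def remaining_budget(deadline):
    """Relative time left before an absolute deadline, floored at zero."""
    return max(0.0, deadline - time.monotonic())

def call_downstream(name, deadline, cost):
    """Simulate one hop: abort early if the budget is already spent."""
    budget = remaining_budget(deadline)
    if budget <= 0:
        raise TimeoutError(f"DEADLINE_EXCEEDED before calling {name}")
    time.sleep(min(cost, budget))  # do the work, capped at the budget

deadline = time.monotonic() + 0.05   # A gives the whole chain 50 ms
call_downstream("B", deadline, 0.02) # B spends 20 ms of it
call_downstream("C", deadline, 0.02) # C inherits whatever is left
```

Because every hop measures against the same absolute deadline, a slow B automatically shrinks what C is allowed to spend.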

3. How do you evolve a protobuf schema without breaking existing clients?

Answer: Rules:
1. Never change a field number — they are the wire identity.
2. Never change a field's type incompatibly (e.g. int32 to string changes the wire encoding); only wire-compatible swaps such as int32 ↔ int64 are safe.
3. Add new fields with new field numbers — old clients ignore unknown fields.
4. Mark removed fields as reserved to prevent reuse.
5. Rename fields freely (wire format uses numbers, not names).
6. Use optional for new fields so old code gets default values. For major changes: create a new service version (v2) and run both until migration is complete. Test with old-client/new-server and new-client/old-server combinations.
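The rules above correspond directly to proto syntax; an illustrative evolved message (names and numbers are examples):

```proto
message User {
  reserved 3, 4;             // numbers of deleted fields, never to be reused
  reserved "nickname";       // the old field name, kept off-limits too
  int64 id = 1;              // unchanged: same number, same type
  string email = 2;
  optional string phone = 5; // new field: fresh number, explicit presence
}
```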