Quiz: Message Queues
6 questions
L1 (3 questions)
1. What is the difference between a message queue (RabbitMQ) and a log-based broker (Kafka), and when do you choose each?
Message queue: messages are consumed and deleted (each message is delivered once to one consumer in a group). Good for task distribution, work queues, and RPC. Log-based broker: messages are appended to a persistent log and retained for a configurable period; consumers track their own offset. Good for event sourcing, stream processing, and replay. Choose RabbitMQ when you need complex routing (exchanges, bindings, dead-letter queues) and message-level acknowledgment. Choose Kafka when you need high throughput, replay capability, and multiple consumers reading the same stream independently.

2. What does 'at-least-once' vs 'at-most-once' vs 'exactly-once' delivery mean, and which should you design for?
At-most-once: the message may be lost but never duplicated (ack before processing). At-least-once: the message is never lost but may be duplicated (ack after processing, retry on failure). Exactly-once: the message is delivered and processed exactly once (requires idempotent consumers or transactional processing). Design for at-least-once with idempotent consumers; it is the practical sweet spot. True exactly-once is extremely expensive and is usually achieved by making at-least-once processing idempotent (deduplication keys, upserts instead of inserts).

3. What is backpressure in a messaging system and how do you handle it?
Backpressure occurs when a consumer cannot process messages as fast as they arrive. The queue grows, memory fills, and eventually messages are dropped or the broker destabilizes. Handling:
1. Scale consumers horizontally (add more workers/partitions).
2. Set queue length limits with reject or dead-letter policies.
3. Implement consumer-side rate limiting with prefetch count (RabbitMQ: prefetch_count=10).
4. Use flow control (producer slows down when queue is full).
5. Monitor queue depth and consumer lag — alert before critical thresholds.
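Points 2 and 4 above can be sketched in plain Python, with a bounded in-memory queue standing in for the broker. The `produce` helper and its overflow-to-dead-letter policy are illustrative assumptions, not a real broker API; real brokers enforce queue limits server-side (RabbitMQ, for example, supports a per-queue maximum length with a configurable overflow behaviour).

```python
import queue

def produce(q: queue.Queue, msg, dead_letters: list) -> bool:
    """Try to enqueue; on overflow, dead-letter instead of blocking the producer."""
    try:
        q.put_nowait(msg)
        return True
    except queue.Full:
        dead_letters.append(msg)  # overflow policy: dead-letter rather than drop silently
        return False

q = queue.Queue(maxsize=3)  # queue length limit
dlq = []
for i in range(5):
    produce(q, f"msg-{i}", dlq)

print(q.qsize())  # 3 — queue capped at its limit
print(dlq)        # ['msg-3', 'msg-4'] — overflow was dead-lettered
```

A blocking `q.put(msg)` instead of `put_nowait` would give the flow-control variant from point 4: the producer simply stalls until the consumer drains the queue.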
L2 (3 questions)
1. How do you handle poison messages (messages that repeatedly fail processing) in a message queue?
1. Set a maximum retry count (delivery count or redelivery limit).
2. After max retries, route the message to a dead-letter queue (DLQ).
3. Monitor the DLQ — alert when messages appear.
4. Build tooling to inspect, fix, and replay DLQ messages.
5. Add structured error metadata on each failure (timestamp, error message, stack trace) as message headers. Never silently discard failed messages. In Kafka: use a retry topic with backoff, then a DLQ topic. In RabbitMQ: configure x-dead-letter-exchange on the queue.
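Steps 1, 2, and 5 can be sketched without a broker. `MAX_RETRIES`, the header names (`x-delivery-count`, `x-last-error`, `x-failed-at`), and the in-memory DLQ list are all illustrative assumptions; a real consumer would nack/requeue rather than recurse, and the broker would carry similar metadata in message headers.

```python
import time

MAX_RETRIES = 3

def handle(msg: dict, process, dlq: list) -> None:
    """Process a message; after MAX_RETRIES failures, dead-letter it with error metadata."""
    headers = msg.setdefault("headers", {})
    headers.setdefault("x-delivery-count", 0)
    try:
        process(msg["body"])
    except Exception as exc:
        headers["x-delivery-count"] += 1
        headers["x-last-error"] = repr(exc)   # structured error metadata on each failure
        headers["x-failed-at"] = time.time()
        if headers["x-delivery-count"] >= MAX_RETRIES:
            dlq.append(msg)                   # route to DLQ; never silently discard
        else:
            handle(msg, process, dlq)         # redeliver (a real broker requeues instead)

def poison(body):
    raise ValueError("cannot parse " + body)

dlq = []
handle({"body": "bad-payload"}, poison, dlq)
print(dlq[0]["headers"]["x-delivery-count"])  # 3
```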
2. How do Kafka consumer groups work and what happens during a rebalance?
A consumer group is a set of consumers that cooperatively consume a topic. Each partition is assigned to exactly one consumer in the group. When a consumer joins or leaves (crash, scale up/down), a rebalance occurs: the group coordinator reassigns partitions. During a rebalance, consumption pauses, which can cause latency spikes. Mitigation: use cooperative sticky rebalancing (incremental rebalance; only moved partitions pause), tune session.timeout.ms and heartbeat.interval.ms, and minimize consumer group churn. Static group membership (group.instance.id) prevents a rebalance on brief disconnects.

3. How do you implement the outbox pattern to ensure reliable message publishing alongside database transactions?
Problem: writing to both a database and a message broker is not atomic; one can succeed while the other fails. Outbox pattern:
1. Write the business data AND the outgoing message to the same database in one transaction (the message goes into an 'outbox' table).
2. A separate process (poller or CDC connector) reads the outbox table and publishes messages to the broker.
3. Mark messages as published after broker acknowledgment. This guarantees at-least-once publishing without distributed transactions. Debezium with Kafka Connect is a popular CDC-based implementation.
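The three steps above can be sketched with an in-memory SQLite database as the datastore and a plain list standing in for the broker. The table names, columns, and `poll_outbox` function are illustrative assumptions; a production poller would batch, handle publish failures, and tolerate duplicates (at-least-once).

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)")

# 1. Business write and outbox write in ONE transaction
#    (the connection context manager commits atomically).
with db:
    db.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
    db.execute("INSERT INTO outbox (payload) VALUES (?)",
               (json.dumps({"event": "order_created", "item": "widget"}),))

broker = []  # stand-in for the message broker

def poll_outbox() -> None:
    """2. A separate poller reads unpublished rows and publishes them."""
    rows = db.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        broker.append(json.loads(payload))  # publish; the append is our broker "ack"
        # 3. Mark as published only after the broker acknowledgment.
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

poll_outbox()
print(broker)  # [{'event': 'order_created', 'item': 'widget'}]
```

If the process dies between publishing and the UPDATE, the row is re-read and re-published on the next poll, which is exactly the at-least-once guarantee the answer describes.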