- devops
- l2
- topic-pack
- rabbitmq
- kafka
---
Portal | Level: L2: Operations | Topics: RabbitMQ & Message Queues, Kafka | Domain: DevOps & Tooling
RabbitMQ & Message Queues - Primer¶
Why This Matters¶
RabbitMQ is an open-source message broker implementing the Advanced Message Queuing Protocol (AMQP). It sits between services that produce messages (publishers) and services that consume them, enabling asynchronous processing, service decoupling, and reliable delivery. If a consumer goes down, messages wait in the queue until it recovers. RabbitMQ is a foundational tool for building resilient, loosely-coupled systems — still widely deployed alongside or instead of Kafka. Unlike Kafka (which is a distributed log optimized for high-throughput streaming), RabbitMQ is a traditional message broker optimized for flexible routing, message acknowledgment, and delivery guarantees.
Core Concepts¶
1. Architecture¶
- Publisher — an application that sends messages to an exchange.
- Exchange — receives messages from publishers and routes them to queues based on rules. Four types: direct (exact routing key match), fanout (broadcast to all bound queues), topic (pattern matching with wildcards), and headers (match on message headers).
- Queue — a buffer that stores messages until a consumer retrieves them. Can be durable (survives broker restart) or transient.
- Binding — a rule linking an exchange to a queue, optionally filtered by a routing key.
- Routing Key — a string attached to each message that exchanges use to decide which queues receive it.
- Consumer — an application that reads and acknowledges messages from a queue.
- Virtual Host (vhost) — logical partition within a broker for isolating environments or tenants.
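To make the publisher → exchange → binding → queue → consumer flow concrete, here is a toy in-memory sketch of a direct exchange. This is illustrative only: it is neither pika nor anything resembling the broker's internals, and all class and variable names are invented for the example.

```python
# Toy model (illustrative, NOT broker code): a direct exchange routes a
# message to every queue whose binding key exactly matches the routing key.
from collections import defaultdict, deque

class DirectExchange:
    def __init__(self):
        self.bindings = defaultdict(list)   # binding key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        for queue in self.bindings[routing_key]:
            queue.append(message)           # queue buffers until consumed

tasks = DirectExchange()
build_queue, deploy_queue = deque(), deque()
tasks.bind(build_queue, "build")
tasks.bind(deploy_queue, "deploy")

tasks.publish("build", "compile service A")
tasks.publish("deploy", "ship service A")

print(build_queue.popleft())   # consumer reads: compile service A
```

Note that the publisher never names a queue directly; it only knows the exchange and a routing key. That indirection is what makes the topology rewirable without touching application code.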
2. Exchange Types¶
| Type | Routing Logic | Use Case |
|---|---|---|
| direct | Exact match on routing key | Task queues, command routing |
| fanout | Broadcast to all bound queues | Notifications, event broadcasting |
| topic | Pattern match with * (one word) and # (zero or more) | Log routing, event categorization |
| headers | Match on message headers (ignore routing key) | Complex routing without string keys |
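The topic wildcard rules can be sketched in a few lines of Python. This is an approximation of the matching semantics for intuition-building, not RabbitMQ's actual matching code (the broker uses an optimized trie internally):

```python
# Sketch of AMQP topic matching: routing keys and patterns are lists of
# dot-separated words; '*' matches exactly one word, '#' matches zero or more.
def topic_matches(pattern: str, key: str) -> bool:
    def match(p, k):
        if not p:
            return not k                 # pattern exhausted: key must be too
        if p[0] == "#":
            # '#' absorbs zero or more words, then the rest must match
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:  # one-word wildcard or literal word
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), key.split("."))

print(topic_matches("*.error", "app.error"))     # True
print(topic_matches("*.error", "app.db.error"))  # False: '*' is one word only
print(topic_matches("kern.#", "kern"))           # True: '#' matches zero words
```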
# Declare exchanges
rabbitmqadmin declare exchange name=logs type=topic durable=true
rabbitmqadmin declare exchange name=events type=fanout durable=true
rabbitmqadmin declare exchange name=tasks type=direct durable=true
3. Management CLI (rabbitmqctl and rabbitmqadmin)¶
# rabbitmqctl — core management tool (runs on broker node)
rabbitmqctl status # broker status
rabbitmqctl list_queues name messages consumers # queue overview
rabbitmqctl list_exchanges name type # exchange list
rabbitmqctl list_connections # active connections
rabbitmqctl list_channels # active channels
rabbitmqctl list_consumers # active consumers
rabbitmqctl list_vhosts # virtual hosts
rabbitmqctl list_users # users
rabbitmqctl list_permissions # permissions
# rabbitmqadmin — HTTP API CLI (can run remotely)
# Install: download from http://localhost:15672/cli/rabbitmqadmin
# Declare a durable queue
rabbitmqadmin declare queue name=task_queue durable=true
# Declare a binding (connect exchange to queue)
rabbitmqadmin declare binding source=logs destination=error_queue \
routing_key="*.error"
# Publish a message
rabbitmqadmin publish exchange=amq.default routing_key=task_queue \
payload="build #42 started" properties='{"delivery_mode": 2}'
# delivery_mode: 2 = persistent (survives broker restart)
# Get messages (peek without consuming)
rabbitmqadmin get queue=task_queue count=5 ackmode=ack_requeue_true
# Consume (remove from queue)
rabbitmqadmin get queue=task_queue ackmode=ack_requeue_false
# Purge a queue (delete all messages)
rabbitmqadmin purge queue name=task_queue
# Delete a queue
rabbitmqadmin delete queue name=old_queue
4. Queue Patterns¶
Work queues (competing consumers):
Multiple consumers pull from the same queue. RabbitMQ distributes messages round-robin. Use prefetch_count to prevent one slow consumer from getting all messages.
# Set prefetch count (via management API)
# In application code:
# channel.basic_qos(prefetch_count=10)
# Check consumer count
rabbitmqctl list_queues name consumers messages_unacknowledged
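The round-robin behaviour can be pictured with a small simulation (again, not broker code; worker and job names are invented). Without a prefetch limit the broker pushes messages to consumers in turn regardless of whether they have finished the previous one, which is why `basic_qos` matters for slow consumers:

```python
# Tiny simulation of round-robin dispatch to competing consumers:
# each ready message is handed to the next consumer in turn.
from itertools import cycle

consumers = {"worker-1": [], "worker-2": [], "worker-3": []}
rr = cycle(consumers)   # iterate consumer names in insertion order, forever

for task in ["job-1", "job-2", "job-3", "job-4", "job-5"]:
    consumers[next(rr)].append(task)

for name, jobs in consumers.items():
    print(name, jobs)
# worker-1 gets job-1 and job-4; worker-2 gets job-2 and job-5;
# worker-3 gets job-3 -- regardless of how fast each worker actually is
```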
Dead letter queues (DLQ):
# Declare a queue with dead-letter exchange
rabbitmqadmin declare queue name=orders durable=true \
arguments='{"x-dead-letter-exchange": "dlx", "x-dead-letter-routing-key": "orders.dead", "x-message-ttl": 86400000}'
# x-message-ttl is in milliseconds: 86400000 ms = 24 hours
rabbitmqadmin declare exchange name=dlx type=direct durable=true
rabbitmqadmin declare queue name=orders_dead durable=true
rabbitmqadmin declare binding source=dlx destination=orders_dead routing_key=orders.dead
# Messages rejected, expired, or exceeding max length go to orders_dead
Priority queues:
rabbitmqadmin declare queue name=priority_tasks durable=true \
arguments='{"x-max-priority": 10}'
# Publish with priority
rabbitmqadmin publish exchange=amq.default routing_key=priority_tasks \
payload="urgent task" properties='{"priority": 9}'
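The delivery order a priority queue produces can be sketched with Python's `heapq` (a min-heap, so priorities are negated to pop the highest first). This simulates the observable ordering only, not the broker's implementation:

```python
# Sketch of priority-queue delivery order: highest priority first,
# FIFO within the same priority (seq acts as the tie-breaker).
import heapq

q, seq = [], 0
for priority, payload in [(1, "routine cleanup"), (9, "urgent task"), (5, "report")]:
    heapq.heappush(q, (-priority, seq, payload))  # negate: heapq is a min-heap
    seq += 1

delivered = []
while q:
    _, _, payload = heapq.heappop(q)
    delivered.append(payload)

print(delivered)   # ['urgent task', 'report', 'routine cleanup']
```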
Quorum queues (replicated, durable — recommended for production):
rabbitmqadmin declare queue name=important_events durable=true \
arguments='{"x-queue-type": "quorum"}'
# Quorum queues replicate across cluster nodes using Raft consensus
# Safer than classic mirrored queues (which are deprecated)
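The availability math behind quorum queues is simple Raft arithmetic: a write commits once a majority of replicas have it, so a group of n members tolerates the loss of floor((n-1)/2) nodes. A quick check:

```python
# Raft majority math for quorum queues: a group of n replicas needs
# n//2 + 1 members alive to make progress.
def quorum(n: int) -> int:
    return n // 2 + 1

for n in (3, 5, 7):
    print(f"{n} replicas: quorum={quorum(n)}, tolerates {n - quorum(n)} failures")
# 3 replicas tolerate 1 failure, 5 tolerate 2, 7 tolerate 3
```

This is why quorum queues are typically deployed with an odd replica count: going from 3 to 4 replicas adds cost but no extra failure tolerance.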
5. Clustering¶
# Join a node to an existing cluster
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl start_app
# Check cluster status
rabbitmqctl cluster_status
# Remove a node from cluster
rabbitmqctl forget_cluster_node rabbit@node3
# Set cluster name
rabbitmqctl set_cluster_name production-rabbitmq
Cluster configuration (rabbitmq.conf):
# /etc/rabbitmq/rabbitmq.conf
cluster_formation.peer_discovery_backend = dns
cluster_formation.dns.hostname = rabbitmq.service.consul
cluster_formation.node_cleanup.interval = 30
cluster_formation.node_cleanup.only_log_warning = true
# Networking
listeners.tcp.default = 5672
management.tcp.port = 15672
# Memory limits
vm_memory_high_watermark.relative = 0.6
vm_memory_high_watermark_paging_ratio = 0.5
# Disk limits
disk_free_limit.relative = 1.5
6. Monitoring¶
# Enable management plugin (HTTP API + web UI)
rabbitmq-plugins enable rabbitmq_management
# Web UI: http://localhost:15672 (default: guest/guest, local only)
# Enable Prometheus plugin
rabbitmq-plugins enable rabbitmq_prometheus
# Metrics endpoint: http://localhost:15692/metrics
# Key metrics to monitor:
# - Queue depth (messages waiting)
# - Consumer utilization (are consumers keeping up?)
# - Memory usage vs watermark
# - Disk free vs limit
# - Unacknowledged messages (consumers processing but not acking)
# - Connection/channel churn (high churn = application issue)
# Health check
rabbitmqctl node_health_check   # deprecated since 3.8; prefer the rabbitmq-diagnostics checks
rabbitmq-diagnostics check_running
rabbitmq-diagnostics check_port_connectivity
rabbitmq-diagnostics status
# Alarms (triggered when resource limits are hit)
rabbitmqctl list_alarms
# When a memory or disk alarm fires, publishing is blocked cluster-wide
# Per-queue metrics
rabbitmqctl list_queues name \
messages messages_ready messages_unacknowledged \
consumers consumer_utilisation memory
7. User and Permission Management¶
# Add user
rabbitmqctl add_user myapp 'secure_password'
# Set permissions (configure, write, read regex on vhost)
rabbitmqctl set_permissions -p / myapp "^myapp\." "^myapp\." "^myapp\."
# Can only configure/write/read queues/exchanges starting with "myapp."
# Set user tags (management UI access)
rabbitmqctl set_user_tags myapp monitoring
# Create a vhost
rabbitmqctl add_vhost production
rabbitmqctl set_permissions -p production myapp ".*" ".*" ".*"
# Delete default guest user (security)
rabbitmqctl delete_user guest
# List permissions
rabbitmqctl list_user_permissions myapp
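Each of the three permission strings is an ordinary regular expression matched against resource names for configure, write, and read operations respectively. A quick check of what the `^myapp\.` pattern from above actually allows:

```python
# The permission strings are regexes matched against queue/exchange names.
import re

pattern = re.compile(r"^myapp\.")

for name in ["myapp.tasks", "myapp.events", "other.queue", "myapplication"]:
    verdict = "allowed" if pattern.match(name) else "denied"
    print(f"{name}: {verdict}")
# myapp.tasks and myapp.events are allowed; other.queue is denied, and so
# is "myapplication" -- the escaped dot requires a literal "." after "myapp"
```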
8. Operational Patterns¶
Message acknowledgment: Always use manual acks in production. Auto-ack loses messages if the consumer crashes mid-processing.
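The shape of a manual-ack consumer looks roughly like the sketch below. The `FakeChannel` stub stands in for a live channel so the flow can be shown without a broker; with real pika you would register such a callback via `channel.basic_consume(..., auto_ack=False)`, and the callback signature and ack calls differ slightly (pika passes a method frame carrying `delivery_tag`), so treat this as a simplified illustration:

```python
# Manual-ack pattern (simplified pika-style sketch, stubbed channel):
# ack only after successful processing; nack+requeue on failure.
class FakeChannel:
    def __init__(self):
        self.acked, self.nacked = [], []
    def basic_ack(self, delivery_tag):
        self.acked.append(delivery_tag)
    def basic_nack(self, delivery_tag, requeue):
        self.nacked.append((delivery_tag, requeue))

def process(body: bytes):
    if body == b"bad":
        raise ValueError("cannot process")

def on_message(channel, delivery_tag, body):
    try:
        process(body)                       # your business logic
        channel.basic_ack(delivery_tag)     # ack ONLY after success
    except Exception:
        # requeue for retry -- or nack with requeue=False to dead-letter it
        channel.basic_nack(delivery_tag, requeue=True)

ch = FakeChannel()
on_message(ch, 1, b"ok")
on_message(ch, 2, b"bad")
print(ch.acked, ch.nacked)   # [1] [(2, True)]
```

If the consumer process crashes before acking, the broker redelivers the message to another consumer, which is exactly the at-least-once guarantee auto-ack throws away.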
Publisher confirms: Enable publisher confirms to know when the broker has persisted your message. Without confirms, published messages can be lost if the broker crashes.
Connection recovery: Use client libraries with automatic reconnection (most official clients support this). RabbitMQ connections are long-lived — do not open/close per message.
Shovel and Federation: Move messages between brokers (different datacenters, migration).
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
# Configure via management API to replicate queues across brokers
Quick Reference¶
# Service management
sudo systemctl restart rabbitmq-server   # also: start, stop, status
# Status
rabbitmqctl status
rabbitmqctl cluster_status
rabbitmqctl list_queues name messages consumers
# Queue operations
rabbitmqadmin declare queue name=myqueue durable=true
rabbitmqadmin publish exchange=amq.default routing_key=myqueue payload="hello"
rabbitmqadmin get queue=myqueue ackmode=ack_requeue_false
rabbitmqadmin purge queue name=myqueue
# Cluster
rabbitmqctl join_cluster rabbit@node1
rabbitmqctl cluster_status
# Plugins
rabbitmq-plugins enable rabbitmq_management
rabbitmq-plugins enable rabbitmq_prometheus
rabbitmq-plugins list
Wiki Navigation¶
Related Content¶
- Kafka (Topic Pack, L1) — Kafka
- Kafka Flashcards (CLI) (flashcard_deck, L1) — Kafka