Comparison: Caching¶
Category: Storage
Last meaningful update: 2026-03
Verdict (opinionated): Redis, unless you only need simple key-value caching with no data structures, no persistence, and no pub/sub; in that case Memcached is lighter. DragonflyDB is a promising Redis alternative for high-throughput workloads but is newer.
Quick Decision Matrix¶
| Factor | Redis | Memcached | DragonflyDB |
|---|---|---|---|
| Learning curve | Low-Medium | Very Low | Low (Redis-compatible) |
| Operational overhead | Medium | Low | Medium |
| Cost at small scale | Free (self-hosted) / ElastiCache | Free (self-hosted) / ElastiCache | Free (self-hosted) |
| Cost at large scale | Medium | Low | Potentially lower (fewer nodes) |
| Community/ecosystem | Massive | Large (mature, stable) | Growing |
| Hiring | Easy | Easy | Niche (but Redis-compatible) |
| Data structures | Rich (strings, hashes, lists, sets, sorted sets, streams, HyperLogLog) | Strings and binary blobs only | Redis-compatible data structures |
| Persistence | RDB snapshots, AOF, hybrid | None (pure cache) | Snapshots |
| Replication | Master-replica, Sentinel, Cluster | None built-in (client-side) | Replication (developing) |
| Pub/Sub | Yes | No | Yes |
| Lua scripting | Yes | No | Yes |
| Transactions | MULTI/EXEC | No | MULTI/EXEC |
| Memory efficiency | Moderate (overhead per key) | High (slab allocator) | High (shared-nothing architecture) |
| Multi-threading | Single-threaded (I/O threads in 6.0+) | Multi-threaded | Multi-threaded (shared-nothing) |
| Max memory | Limited by single instance | Limited by single instance | Single instance can use a full machine's memory and cores |
| Cluster mode | Redis Cluster (hash slots) | Client-side consistent hashing | Single instance scales vertically |
| Protocol | RESP | Memcached text/binary protocol | RESP (Redis-compatible) |
When to Pick Each¶
Pick Redis when:¶
- You need more than simple caching: sorted sets for leaderboards, streams for event sourcing, pub/sub for real-time messaging
- You need persistence — Redis can function as a primary data store for specific use cases (session storage, rate limiting, feature flags)
- Your application uses Redis-specific data structures and commands
- You want cluster mode for horizontal scaling with built-in partitioning
- The ecosystem of Redis clients, tools, and managed offerings is important
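The leaderboard use case above maps to Redis sorted sets (`ZADD`, `ZINCRBY`, `ZREVRANGE ... WITHSCORES`). A minimal sketch of those semantics, with an in-memory dict standing in for a live Redis server so the logic is runnable on its own (in production these would be redis-py calls):

```python
# Sketch of the sorted-set leaderboard pattern. The dict stands in for a
# Redis sorted set; method names mirror the Redis commands they imitate.

class Leaderboard:
    def __init__(self):
        self._scores = {}  # member -> score

    def zadd(self, member, score):
        self._scores[member] = score

    def zincrby(self, member, delta):
        # Like ZINCRBY: create-or-increment, return the new score.
        self._scores[member] = self._scores.get(member, 0) + delta
        return self._scores[member]

    def zrevrange(self, start, stop):
        # Like ZREVRANGE key start stop WITHSCORES: highest score first.
        ranked = sorted(self._scores.items(), key=lambda kv: -kv[1])
        return ranked[start:stop + 1]

board = Leaderboard()
board.zadd("alice", 120)
board.zadd("bob", 95)
board.zincrby("bob", 40)       # bob is now at 135
top = board.zrevrange(0, 1)    # top two players, best first
```

The point of using a sorted set rather than a plain hash is that rank queries stay cheap: Redis keeps members ordered by score, so "top N" never requires scanning every member.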
Pick Memcached when:¶
- Your use case is pure caching: store computed results, cache database queries, cache API responses
- You want the simplest possible caching layer with minimal operational surface
- Multi-threaded performance matters — Memcached uses all cores natively
- Memory efficiency is critical — Memcached's slab allocator has less per-key overhead than Redis
- You do not need persistence, pub/sub, data structures, or scripting
- You want predictable eviction behavior (LRU) without Redis's eviction policy complexity
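The pure-caching pattern described above is usually cache-aside: check the cache, compute on a miss, store the result. A minimal sketch with a dict standing in for the Memcached client (a real client such as pymemcache exposes similar `get`/`set` calls; the TTL parameter here is illustrative):

```python
cache = {}  # stand-in for a Memcached client

def get_or_compute(key, compute, ttl=None):
    """Cache-aside: return the cached value, or compute, store, and return it."""
    value = cache.get(key)
    if value is None:
        value = compute()
        cache[key] = value  # real client: something like client.set(key, value, expire=ttl)
    return value

calls = 0
def expensive_query():
    global calls
    calls += 1  # count how often the "database" is actually hit
    return "result"

a = get_or_compute("q:1", expensive_query)
b = get_or_compute("q:1", expensive_query)  # second call is served from cache
```

Because Memcached is a pure cache, this pattern must tolerate the cache vanishing at any time: a restart simply means every key goes through the compute path once more.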
Pick DragonflyDB when:¶
- You need Redis compatibility but with better multi-core utilization
- Your Redis workload is hitting single-threaded throughput limits
- You want to consolidate multiple Redis instances into fewer, larger Dragonfly instances
- You are comfortable with a newer project that is less battle-tested than Redis
- You want Redis Cluster functionality without the operational complexity of managing cluster slots
Nobody Tells You¶
Redis¶
- Redis is single-threaded for command execution. I/O threads (Redis 6.0+) handle network I/O in parallel, but command processing is serial. One slow command (`KEYS *`, `SMEMBERS` on a large set) blocks everything. `KEYS *` in production will hang your Redis instance; use `SCAN` instead. This is the most common Redis footgun.
- Redis memory usage is higher than the raw data size. Each key has overhead (dictEntry, redisObject, SDS string). A key storing a 10-byte value may consume 80+ bytes of Redis memory.
- Redis Cluster is operationally complex. Hash slots, resharding, and the requirement that all keys in a multi-key operation live in the same slot create application-level constraints.
- Redis Sentinel (HA) works but failover is not instant. During failover (5-30 seconds), writes fail. Applications must handle connection retry and reconnection logic.
- Redis Labs (now Redis Inc.) relicensed key modules (RediSearch, RedisJSON, etc.) under SSPL. The core Redis remains BSD, but advanced modules are source-available, not open source. Understand which features you need and which license applies.
- ElastiCache for Redis and MemoryDB for Redis are different products. ElastiCache is caching-optimized, MemoryDB is durability-optimized. Choose based on whether you need a cache or a database.
- `SAVE` (synchronous RDB dump) blocks the server. Use `BGSAVE` for background snapshots. `SAVE` in production is a P1 outage waiting to happen.
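The KEYS-versus-SCAN footgun above comes down to blocking: `KEYS` walks the whole keyspace in one command, while `SCAN` returns a cursor plus a small batch per call, so the server can interleave other work. A sketch of the cursor loop, using a list of keys in place of a live server (redis-py's `scan_iter` wraps the same loop; note that real Redis cursors are opaque tokens, not simple offsets as here):

```python
def scan(store_keys, cursor, count=2):
    """One SCAN step: return (next_cursor, batch). Cursor 0 means iteration is done."""
    batch = store_keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(store_keys):
        next_cursor = 0  # Redis signals completion by returning cursor 0
    return next_cursor, batch

keys = ["user:1", "user:2", "user:3", "user:4", "user:5"]
seen = []
cursor = 0
while True:
    cursor, batch = scan(keys, cursor)
    seen.extend(batch)  # each batch is small; the server never blocks for long
    if cursor == 0:
        break
```

The trade-off is that `SCAN` gives weaker guarantees than a point-in-time snapshot (keys added or removed mid-iteration may or may not appear), which is acceptable for maintenance tasks and exactly why it is safe in production where `KEYS *` is not.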
Memcached¶
- Memcached has no persistence. Restart = empty cache. This is by design, but applications must handle cache misses gracefully (cache stampede protection).
- The slab allocator wastes memory when value sizes vary widely. If you have 50-byte and 500-byte values, the slab class for 500-byte values wastes ~450 bytes per 50-byte value stored there. Tune slab classes or accept waste.
- Memcached has no built-in replication or clustering. Client-side consistent hashing distributes keys across servers, but losing a server loses that partition's cache. No failover.
- Memcached maximum value size is 1MB by default. Larger values require increasing the `-I` flag, but this conflicts with slab allocation.
- There is no authentication in Memcached by default. SASL authentication exists but is not widely used. Memcached should live on a private network, never exposed publicly.
- Memcached's simplicity is its advantage. There are fewer things to misconfigure, fewer operational runbooks needed, and fewer failure modes. For pure caching, this simplicity is valuable.
- The Memcached community is mature and stable — which also means development is slow. Major new features are rare. This is fine because the use case (caching) does not require frequent innovation.
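The client-side consistent hashing mentioned above can be sketched as a hash ring: each server contributes many virtual points on a circle, and a key belongs to the first server point at or after the key's hash (wrapping around). Losing a server only remaps that server's arcs, which is why most keys survive a node failure. A minimal stand-alone sketch (real client implementations such as ketama-style hashing use the same idea with tuned hash functions and point counts):

```python
import bisect
import hashlib

def _hash(value):
    # MD5 is fine here: we need distribution, not cryptographic strength.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Each server gets `vnodes` virtual points, sorted around the ring.
        self._ring = sorted(
            (_hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def server_for(self, key):
        # First ring point at or after the key's hash, wrapping past the end.
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["mc1:11211", "mc2:11211", "mc3:11211"])
owner = ring.server_for("session:abc")
```

Removing a server deletes only its points: any key whose nearest point belonged to a surviving server still maps to that same point, so only the failed server's share of keys turns into cache misses.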
DragonflyDB¶
- DragonflyDB is young (first release 2022). Production deployments exist but are fewer than Redis by orders of magnitude. You are an early adopter.
- The "25x throughput" benchmarks are achievable in specific workloads (multi-core utilization) but are not universal. Benchmark your specific workload before committing.
- DragonflyDB's shared-nothing architecture uses all cores efficiently, which means a single Dragonfly instance can replace a Redis Cluster of 6+ nodes. This simplifies operations.
- Redis compatibility is high but not 100%. Some commands behave slightly differently, and some Redis modules are not supported. Test your application thoroughly.
- The managed offering (Dragonfly Cloud) is available but smaller than Redis's managed ecosystem (ElastiCache, MemoryDB, Redis Cloud, Upstash).
- DragonflyDB's licensing is BSL (Business Source License). It is source-available but not open source. If licensing matters, evaluate carefully.
- Persistence support is improving but less mature than Redis's RDB/AOF. For use cases requiring durability, test recovery scenarios.
Migration Pain Assessment¶
| From → To | Effort | Risk | Timeline |
|---|---|---|---|
| Memcached → Redis | Low-Medium | Low | 1-3 weeks |
| Redis → Memcached | Medium | Medium (losing features) | 2-4 weeks |
| Redis → DragonflyDB | Low | Medium | 1-2 weeks (compatibility testing) |
| DragonflyDB → Redis | Low | Low | 1 week |
| Self-hosted Redis → ElastiCache | Low | Low | 1-2 days |
| ElastiCache → self-hosted | Medium | Medium | 1-2 weeks |
Redis → DragonflyDB migration is the easiest path because DragonflyDB speaks RESP (Redis protocol). Point your application at Dragonfly, run your test suite, and monitor for compatibility issues. The risk is in edge cases that only appear under production load.
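One low-risk way to run the compatibility testing described above is to shadow reads: serve traffic from the current backend while issuing the same command to the candidate and diffing the results. A sketch with dicts standing in for the two clients (the function name and divergence data are illustrative, not from any library):

```python
def shadow_get(primary, candidate, key, mismatches):
    """Read from the primary, compare against the candidate, record divergence."""
    expected = primary.get(key)
    actual = candidate.get(key)
    if actual != expected:
        mismatches.append((key, expected, actual))
    return expected  # always serve the primary's answer during the trial

redis_like = {"user:1": "alice", "user:2": "bob"}
dragonfly_like = {"user:1": "alice", "user:2": "BOB"}  # simulated divergence
mismatches = []
v1 = shadow_get(redis_like, dragonfly_like, "user:1", mismatches)
v2 = shadow_get(redis_like, dragonfly_like, "user:2", mismatches)
```

Because the primary's answer is always returned, a divergence costs nothing in production; the mismatch log tells you whether the edge cases under real load are safe before you cut over.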
The Interview Answer¶
"Redis is the default choice because it covers caching plus data structures, pub/sub, persistence, and scripting — it's a Swiss Army knife for in-memory workloads. Memcached is the right choice when you need simple caching at high throughput without Redis's single-threaded bottleneck. DragonflyDB is interesting as a Redis-compatible alternative that uses all CPU cores, potentially replacing a Redis Cluster with a single instance, but it's newer and less proven. The operational principle is: treat your cache as ephemeral. Applications should handle cache misses gracefully — a cache failure should cause latency degradation, not a total outage."
Cross-References¶
- Topic Packs: Redis, Database Ops
- Related Comparisons: Relational Databases, Messaging