Redis-compatible servers all speak the same protocol, but they do not behave the same once you push them through different cache patterns.
This benchmark compares four Redis-compatible engines across the jobs people actually care about in a cache: writes, reads, mixed traffic, batched requests, and message fanout.
Each engine shows up with two tested versions side by side. The charts stay in raw units from start to finish, so you can read the actual results instead of relative percentages.
Versions tested
- Redis: 8.0.0, 8.4.0
- DragonflyDB: v1.0.0, v1.37.0
- Valkey: 9.0.0, 9.0.3
- KeyDB: v6.3.0, v6.3.4
Throughput
Small writes
This shows raw write throughput with small values. DragonflyDB is fastest here, Redis and Valkey stay fairly close, and KeyDB comes in lower.
Reads from memory
This is repeated reads from data that is already cached in memory. DragonflyDB stays strong, Redis and Valkey cluster together, and KeyDB swings more between versions.
Mixed reads and writes
This is closer to a normal cache pattern, with reads doing most of the work and writes mixed in. DragonflyDB leads again, while Redis, Valkey, and KeyDB sit closer together.
Batched requests
This shows what happens when clients send requests in batches instead of one by one. Valkey leads clearly here, with DragonflyDB and Redis behind it and KeyDB lower.
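The batched pattern here is plain RESP pipelining: instead of one round trip per command, the client serializes several commands into a single buffer, writes it once, and reads the replies back in order. A minimal sketch of how those bytes are built, in pure Python (this is an illustrative encoder, not a full client):

```python
def encode_command(*parts: str) -> bytes:
    """Encode one command as a RESP array of bulk strings."""
    buf = f"*{len(parts)}\r\n".encode()
    for p in parts:
        data = p.encode()
        buf += f"${len(data)}\r\n".encode() + data + b"\r\n"
    return buf


def encode_pipeline(commands) -> bytes:
    """Concatenate commands so they travel in one network write."""
    return b"".join(encode_command(*cmd) for cmd in commands)


# Three SETs leave the client in a single write instead of three round trips.
batch = encode_pipeline([
    ("SET", "k1", "a"),
    ("SET", "k2", "b"),
    ("SET", "k3", "c"),
])
```

Because the server still processes the commands one by one, pipelining mostly saves network round trips and per-request syscall overhead, which is why it shifts the throughput rankings relative to the one-at-a-time tests.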
Latency
Small writes
This chart looks at p95 write latency, which helps show how slow requests get once writes start to stack up. Redis and KeyDB are lowest here, DragonflyDB is close, and Valkey is higher.
Reads from memory
This is p95 latency for cache hits from memory. Redis stays consistently low, DragonflyDB is a bit higher, Valkey is slower here, and KeyDB varies more between versions.
Mixed reads and writes
This shows p95 latency under a more typical cache workload with both reads and writes. Redis is strongest here, KeyDB follows, DragonflyDB is slightly higher, and Valkey trails.
Batched requests
This is p95 latency when the client sends work in batches. DragonflyDB and Valkey look best here, while Redis and KeyDB sit higher on the slow tail.
Memory use
Reads from memory
This shows how much memory each engine needs to hold the same cached data during the read heavy test. DragonflyDB is clearly the most efficient here.
Mixed reads and writes
This is the same memory efficiency view under mixed cache traffic. The overall shape stays the same, with DragonflyDB using less memory for the same payload.
Batched requests
This shows memory efficiency during the batched request test. DragonflyDB still leads, while Redis and Valkey stay close and KeyDB carries more overhead.
Message fanout
Fanout latency
This shows how quickly a published message reaches multiple subscribers. The field is fairly tight, but Valkey is slower while Redis, KeyDB, and the newer DragonflyDB run stay lower.
Fanout peak memory
This shows peak memory during the fanout test. Most runs stay low, while DragonflyDB v1.0.0 stands out as the clear outlier.
How the tests were performed
These benchmarks ran on a Mac mini with an Apple M4 chip, macOS 26.3, and Docker 29.1.3.
- Each engine and version ran under the same Docker limits: 8 CPU cores, 4 GiB container memory, and 3.40 GiB maxmemory.
- Each result is the median of 5 fresh container repeats, with 15 seconds of warmup, 45 seconds of measurement, and 20 seconds of cooldown.
- The suite covers small writes, reads from memory, mixed reads and writes, batched requests, and pub/sub fanout.
- Latency charts use p95.
- DragonflyDB pub/sub was rerun on Mar 20, 2026 because the older v1.0.0 peak memory result looked unusual, and the rerun reproduced the same pattern.
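The aggregation described above is simple to reproduce: the median across the 5 repeats for each headline result, and the 95th percentile within a run for the latency charts. A small sketch of both (the function names are ours, not the harness's; the p95 uses the nearest-rank method, which is one common convention):

```python
import math
import statistics


def summarize_repeats(repeat_results):
    """Median across fresh-container repeats, as each charted result is reported."""
    return statistics.median(repeat_results)


def p95(latency_samples_ms):
    """Nearest-rank 95th percentile: the sample at or below which 95% of
    observations fall. This is what the latency charts plot."""
    ordered = sorted(latency_samples_ms)
    idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[idx]
```

Reporting the median of repeats rather than the mean keeps a single noisy container run from skewing a result, and p95 highlights the slow tail that a mean latency would hide.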
Conclusion
Across the raw results, DragonflyDB looks strongest on the standard cache charts and on memory use, Valkey leads the batched request charts, and Redis 8.4.0 keeps the cleanest latency on the simpler request patterns.
The charts above are the best way to see which of those tradeoffs matters most for your workload.
Let us know what you would like us to benchmark next.
Happy Benchmark Friday!