
DEV Community

H33.ai

Posted on • Originally published at cachee.ai

Why We Stopped Using Redis (And Built a Sub-Microsecond Cache in Rust Instead)

Redis Is a Network Hop

Every Redis call is a TCP round-trip. With 96 concurrent workers performing FHE operations, all connections funneled through a single Redis container, which serialized them. Our throughput dropped from 1.51M to 136K operations per second — an 11x regression.

The Fix: In-Process DashMap

Cachee replaced Redis in our hot path with an in-process Rust cache:

  • 0.085 microseconds per lookup (vs ~50 microseconds for Redis RTT)
  • 44x faster than even raw STARK proof verification
  • Zero TCP contention — no serialization bottleneck
  • CacheeLFU eviction with Count-Min Sketch admission (512 KiB constant memory)
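The Count-Min Sketch behind the admission policy can be sketched in a few dozen lines. This is a minimal illustration, not Cachee's actual implementation: the row/width parameters below (4 rows of 32,768 u32 counters, which happens to total 512 KiB) and the TinyLFU-style `admit` rule are assumptions chosen to match the memory figure above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const ROWS: usize = 4;
const WIDTH: usize = 1 << 15; // 4 rows * 32768 cols * 4 bytes = 512 KiB

/// Count-Min Sketch: approximate frequency counting in constant memory.
/// Counters only over-estimate (hash collisions add, never subtract),
/// so taking the minimum across rows bounds the error.
pub struct CountMinSketch {
    rows: Vec<Vec<u32>>,
}

impl CountMinSketch {
    pub fn new() -> Self {
        Self { rows: vec![vec![0u32; WIDTH]; ROWS] }
    }

    /// Derive a per-row index by seeding the hasher with the row number.
    fn index<K: Hash>(&self, key: &K, row: usize) -> usize {
        let mut h = DefaultHasher::new();
        row.hash(&mut h);
        key.hash(&mut h);
        (h.finish() as usize) % WIDTH
    }

    pub fn increment<K: Hash>(&mut self, key: &K) {
        for r in 0..ROWS {
            let i = self.index(key, r);
            self.rows[r][i] = self.rows[r][i].saturating_add(1);
        }
    }

    /// Estimated frequency: the minimum across all rows.
    pub fn estimate<K: Hash>(&self, key: &K) -> u32 {
        (0..ROWS)
            .map(|r| self.rows[r][self.index(key, r)])
            .min()
            .unwrap()
    }

    /// TinyLFU-style admission: only let a candidate into the cache if
    /// it is at least as popular as the key it would evict.
    pub fn admit<K: Hash>(&self, candidate: &K, victim: &K) -> bool {
        self.estimate(candidate) >= self.estimate(victim)
    }
}
```

The point of the admission step is that one-off keys (scans, cold misses) never displace genuinely hot entries, which is what keeps hit rates high under a fixed memory budget.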

When Redis Still Makes Sense

Redis is fine for sorted sets, pub/sub, and multi-instance shared state. But if you are doing millions of lookups per second on a single instance, an in-process cache eliminates the network entirely.

We kept Redis for leaderboard sorted sets only. Everything else — rate limiting, sessions, ZKP proof caching — moved to Cachee.
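To make the rate-limiting migration concrete, here is a minimal fixed-window limiter backed by an in-process map instead of Redis `INCR`/`EXPIRE`. The names and structure are illustrative, not Cachee's API; a `std::sync::Mutex` over a `HashMap` stands in for the sharded concurrent map.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Fixed-window rate limiter: at most `limit` requests per `window`,
/// tracked per key, entirely in process memory (no network hop).
pub struct RateLimiter {
    limit: u32,
    window: Duration,
    buckets: Mutex<HashMap<String, (Instant, u32)>>,
}

impl RateLimiter {
    pub fn new(limit: u32, window: Duration) -> Self {
        Self { limit, window, buckets: Mutex::new(HashMap::new()) }
    }

    /// Returns true if this request is allowed under the current window.
    pub fn check(&self, key: &str) -> bool {
        let now = Instant::now();
        let mut map = self.buckets.lock().unwrap();
        let entry = map.entry(key.to_string()).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        entry.1 += 1;
        entry.1 <= self.limit
    }
}
```

The trade-off versus Redis is the one named above: this state is per instance, so it only works when a single instance (or sticky routing) handles the keyed traffic.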

Result: 1,667,875 authenticated operations per second on a single Graviton4 instance.

Cachee — post-quantum cache engine.


Introducing H33-74. 74 bytes. Any computation. Post-quantum attested. Forever.
