The 2026 Data Stack is Just Postgres: Unison Delivery for AI and Scale with Simplicity

Complexity is the enemy of reliability. Discover how Fivex Labs replaces specialized infrastructure like Redis and Kafka by architecting highly tuned, scalable systems using modern PostgreSQL 17 and 18 features like AIO, TidStore, and SKIP LOCKED.

Read more at: https://lnkd.in/dSjtiRj6
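The post names SKIP LOCKED, the Postgres feature that makes queue workloads practical without Kafka or Redis. As a hedged illustration only (not the linked article's code), here is a minimal worker sketch against a hypothetical `jobs` table:

```python
# Minimal Postgres-as-queue sketch using FOR UPDATE SKIP LOCKED.
# Assumes a hypothetical table:
#   CREATE TABLE jobs (id bigserial PRIMARY KEY,
#                      status text NOT NULL DEFAULT 'queued',
#                      payload jsonb);
import psycopg2

conn = psycopg2.connect("dbname=app")  # connection details are placeholders

def claim_one_job():
    """Atomically claim the oldest queued job; concurrent workers skip
    rows another transaction has locked instead of blocking on them."""
    with conn:  # commit on success, roll back on error
        with conn.cursor() as cur:
            cur.execute(
                """
                UPDATE jobs
                   SET status = 'running'
                 WHERE id = (
                       SELECT id FROM jobs
                        WHERE status = 'queued'
                        ORDER BY id
                        LIMIT 1
                        FOR UPDATE SKIP LOCKED)
                RETURNING id, payload
                """
            )
            return cur.fetchone()  # None when the queue is empty
```

SKIP LOCKED is what lets many workers poll the same table safely: each claims a different row, and no one waits on anyone else's lock.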
Why Is Redis So Fast? Why Do Engineers Love It?

Redis is designed around a single-threaded event loop, meaning every command is executed sequentially, one after another.

• No thread contention.
• No locking overhead.
• No race conditions.

👉 This architecture makes operations atomic by default: each instruction fully completes before the next begins.

Now add the real superpower:

⚡ In-Memory Storage (RAM)
Redis keeps active data in memory, not on disk. Which means:
• Sub-millisecond latency
• Ultra-fast reads & writes
• Predictable high performance under load

That's why it's the go-to choice for:
✔️ Caching layers
✔️ Session stores
✔️ Leaderboards
✔️ Rate limiting
✔️ Real-time analytics

In simple terms:
Single-threaded execution + In-memory data = Blazing fast & naturally atomic

When performance is critical, Redis is often the first building block added to the architecture.

#Redis #SystemDesign #BackendEngineering #Scalability #PerformanceEngineering
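One concrete payoff of that default atomicity is a rate limiter that needs no locks. A minimal fixed-window sketch with redis-py; the key scheme and limits are illustrative, not from the post:

```python
# Fixed-window rate limiter: INCR is atomic because Redis executes
# commands one at a time on its single command thread.
import redis

r = redis.Redis()  # assumes a local Redis instance

def allow_request(user_id: str, limit: int = 100, window_s: int = 60) -> bool:
    key = f"rate:{user_id}"
    count = r.incr(key)          # atomic increment, no lock needed
    if count == 1:
        r.expire(key, window_s)  # start the window on the first hit
    return count <= limit
```

One caveat worth knowing: if the process dies between INCR and EXPIRE, the key never expires; wrapping both commands in a Lua script, which Redis runs atomically, closes that gap.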
Our simple Redis-based idempotency key check failed spectacularly at scale.

We used a common pattern for idempotent APIs: on receiving a request, we'd SET a key in Redis with a short TTL. If the key already existed, we'd reject the request as a duplicate. This worked perfectly in staging and under moderate production load.

Then a high-volume partner integration went live. Suddenly, our single Redis instance became a massive bottleneck. Command latency skyrocketed, leading to timeouts. Worse, under extreme contention, our check-then-set sequence wasn't atomic, and we started seeing duplicate processing, the very thing we were trying to prevent.

The fix was moving the idempotency check to DynamoDB, using a `ConditionExpression` to ensure the item (our idempotency key) didn't already exist. This `put` operation is atomic and scales horizontally in a way our single Redis instance couldn't. It shifted the bottleneck from a single compute resource to a distributed, managed database service designed for this exact workload. (A sketch of the pattern follows below.)

The lesson: a tool's theoretical guarantees can break down when its operational limits are reached. What works for 100 requests per second might not work for 10,000.

What's your go-to pattern for implementing high-throughput idempotent writes?

Let's connect. I often share system design lessons from the field.

#SystemDesign #DistributedSystems #Scalability
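A minimal sketch of the conditional-put pattern described above, assuming a hypothetical `idempotency_keys` table with partition key `pk` (table and attribute names are illustrative):

```python
# Atomic "insert if absent" in DynamoDB: the write succeeds only when
# no item with this idempotency key exists yet.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("idempotency_keys")  # hypothetical table

def try_claim(idempotency_key: str) -> bool:
    """Return True if this request is new, False if it is a duplicate."""
    try:
        table.put_item(
            Item={"pk": idempotency_key},  # in practice, also write a TTL attribute
            ConditionExpression="attribute_not_exists(pk)",
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another request already claimed this key
        raise
```

Side note: a single Redis `SET key value NX EX ttl` is also atomic, so the check-then-set race is fixable on the Redis side too; the harder limit in the story above was the throughput ceiling of a single instance.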
Ever wondered how Redis is insanely fast… even though it's single-threaded? 🤔

At first glance, "single-threaded" sounds like a limitation. But in the case of Redis, it's actually one of its biggest strengths. Here's why 👇

⚡ 1. No context switching overhead
Multi-threaded systems constantly switch between threads, which costs CPU time. Redis avoids this completely by sticking to a single thread for command execution.

⚡ 2. In-memory operations
Redis stores data in RAM, not on disk. That means data access happens in microseconds, not milliseconds.

⚡ 3. Efficient data structures
It uses highly optimized data structures (like hashes, lists, sets) that are designed for speed and low overhead.

⚡ 4. Event-driven architecture (I/O multiplexing)
Instead of handling one request at a time in a blocking way, Redis uses an event loop (epoll/kqueue) to handle thousands of concurrent connections efficiently. (A toy event loop is sketched below.)

⚡ 5. Simplicity = Speed
No locks, no race conditions, no thread contention. Less complexity means faster execution.

⚡ 6. Optional multithreading where it matters
While command execution is single-threaded, Redis (6.0 and later) uses multiple threads for network I/O, making it even more efficient.

💡 The takeaway: Being single-threaded doesn't make Redis slow; it makes it predictable, efficient, and ridiculously fast for the RIGHT use cases.

Sometimes, simplicity beats complexity.

#Redis #BackendEngineering #SystemDesign #Performance #Databases #TechInsights
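A toy illustration of the I/O-multiplexing idea (one thread, many connections), using Python's standard selectors module rather than anything Redis-specific:

```python
# One thread serves many clients: the OS (epoll/kqueue under the hood)
# tells us which sockets are ready, so we never block on a single one.
import selectors
import socket

sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("localhost", 6400))  # arbitrary demo port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():          # wait until some socket is ready
        if key.fileobj is server:
            conn, _ = server.accept()    # new client connection
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            conn = key.fileobj
            data = conn.recv(1024)
            if data:
                conn.sendall(data)       # handle sequentially: no locks needed
            else:
                sel.unregister(conn)
                conn.close()
```

Redis's real event loop follows the same shape, with epoll or kqueue as the readiness mechanism and command dispatch where this demo just echoes.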
Last month, a customer migrated a 100TB database to Azure DocumentDB. This time, from HBase.

The math?
100 TB. 300 TB with 3x replication. ~430 TB after reserving a conservative 30% of disk as compaction headroom. With 32 TB disks, they needed 14 data nodes; 17 with a few for management.

Annually:
Compute was $115K, at $561/month for each of the 17 nodes.
Storage: $735K (17 disks at $3.6K/month).
Licensing came out to $85K, roughly $5K/node for the year.
Total: $935K. We didn't even consider backups. They were on a different invoice :)

On Azure DocumentDB, this was much, much simpler. With a 4-node replica set and 32 TB disks:
Compute was $77K ($19K/year for each of the 4 nodes).
Storage made a larger dent at $360K ($90K/year per node).
Total: $437K. And that's it! No supplementary miscellaneous costs!

52% back. Yes, this too was a heterogeneous migration. But much more convenient, at half the price.

For more on Azure DocumentDB, check out https://lnkd.in/geja5SXd

#AzureDocument #DocumentDB #TCO #DatabaseMigrations #Azure #DevOps
Siddhesh Vethe, German Eichberger, Javier Figueroa, Vinod Sridharan, Gahl Levy, Patty Chow, Khelan Modi, Sudhanshu Vishodia, Prashanth Madi, Marcelo Fonseca, Jay Gordon
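Reconstructing the post's arithmetic as a quick sanity check. All per-unit prices come from the post itself; the one interpretive assumption is reading "30% for compaction" as headroom, i.e. usable capacity is 70% of raw, which reproduces the ~430 TB figure:

```python
# HBase side, per the post's numbers
import math

logical_tb = 100
raw_tb = logical_tb * 3 / 0.7          # 3x replication, 30% compaction headroom -> ~429 TB
data_nodes = math.ceil(raw_tb / 32)    # 32 TB disks -> 14 nodes
nodes = data_nodes + 3                 # plus a few management nodes -> 17

compute = nodes * 561 * 12             # ~$114K/yr
storage = nodes * 3_600 * 12           # ~$734K/yr
licensing = nodes * 5_000              # ~$85K/yr
hbase_total = compute + storage + licensing   # ~$934K/yr

# Azure DocumentDB side: 4 nodes, compute + storage only
docdb_total = 4 * 19_000 + 4 * 90_000  # ~$436K/yr

savings = 1 - docdb_total / hbase_total
print(round(hbase_total), round(docdb_total), round(savings, 2))  # ~0.53, roughly half
```

The computed savings land at ~53% rather than the quoted 52%, which is consistent given the rounding in the per-node figures.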
If you want to lower your costs across backup, licensing, storage, and compute, and you're fine with the document model, don't hesitate to look us up on Azure DocumentDB. This is happening now: the savings customers report across migrations, even from native providers, are in a similar range. See the migration breakdown above.
I'm starting a new series called Weekly DX. The goal is simple: pick up one new piece of software every week, build something with it, and document the developer experience (DX).

First up, Redis.

I'd always known the theory behind Redis, but I hadn't actually implemented it until now. I expected a bit of a hurdle, but the DX was surprisingly intuitive. If you're used to MongoDB, the workflow is remarkably similar.

Key takeaways from the week:
1. Setup: Creating a database via their portal and getting it linked was near-instant.
2. Tooling: Redis Insight is excellent. It's basically the "Mongo Compass" for Redis, very easy for a newcomer to visualize what's actually happening with the data.
3. Performance: For high-throughput needs, it's easy to see why this is a standard.

To get my hands dirty, I built a simple application with it and deployed it via Vercel. Code is here: https://lnkd.in/gAp_T98H

I know I've only scratched the surface, but now that I've broken the ice, I'm excited to keep expanding my domain knowledge.

If you have suggestions for what I should dive into for Week 2, let me know.
DynamoDB handles 89 million requests per second at single-digit millisecond latency. But it took 5 generations of engineering to get there.

The problem? Hot partitions. One viral moment sends millions of requests to a single partition key. That partition maxes out at 3,000 reads/sec while everything else sits idle.

How they solved it:
→ Gen 1: Equal throughput per partition (broke instantly)
→ Gen 2: 5-minute burst buffer (useless for sustained traffic)
→ Gen 3: Dynamic capacity shifting (still limited by physical nodes)
→ Gen 4: Split for heat: hot partitions literally divide themselves in two
→ Gen 5: Global admission control: throughput decoupled from partitions entirely

The meta twist? The token bucket fleet managing global capacity is itself distributed using consistent hashing. It's consistent hashing all the way down (a toy hash ring is sketched below).

Oh, and DynamoDB has almost nothing in common with the Dynamo paper it's named after. Amazon kept the name and rebuilt everything.

I animated the full journey, from naive hash(key) % N to the 89M req/sec architecture: https://lnkd.in/eW4gvEy8

#DynamoDB #DistributedSystems #SystemDesign #AWS #Databases #SoftwareEngineering
DynamoDB Failed 5 Times. Then It Hit 89M req/sec.
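Since the punchline is "consistent hashing all the way down", here is a toy hash ring, a sketch of the general technique only, not DynamoDB's actual implementation:

```python
# Toy consistent hash ring with virtual nodes: adding or removing a node
# only remaps the keys in its arcs, unlike hash(key) % N, which reshuffles
# almost everything.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (position, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        pos = self._hash(key)
        # first ring position clockwise from the key's hash, wrapping around
        i = bisect.bisect(self.ring, (pos,)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # the same key always lands on the same node
```

The virtual nodes (64 per physical node here) spread load evenly around the ring, so removing one node redistributes only its own arcs among the survivors.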
Redis Streams for Event Systems

Most teams introduce Kafka before they actually need Kafka. For many event-driven systems, Redis Streams would have been enough. And much simpler.

Redis Streams turn Redis into an event log. Producers append events. Consumers read them. But unlike traditional queues, events remain in the stream.

Consumers track progress using consumer groups. Each consumer processes different messages in parallel. This allows horizontal scaling without losing ordering guarantees per stream.

Real scenario: a notification service processes user events. Every action generates an event:
• UserRegistered
• OrderPlaced
• PasswordChanged

The service needs:
• Multiple workers
• Reliable event delivery
• Retry for failed messages

Redis Streams handles this easily. Events are stored in a stream. Workers process them through consumer groups. Failed messages stay pending until acknowledged. No external infrastructure required. (See the sketch below.)

But Redis Streams is not Kafka. Retention is limited by memory. Long-term event storage is not its strength. Streams work best when events are processed quickly and discarded.

Redis Streams shines in lightweight event-driven systems. Kafka shines in massive event pipelines.

Most teams start at the wrong end. They choose complexity first.

#Redis #EventDrivenArchitecture #DistributedSystems #SystemDesign
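A minimal sketch of the consumer-group flow described above, using redis-py; the stream, group, and event names are illustrative:

```python
# Producer appends events; a worker in a consumer group reads, handles,
# and acknowledges them. Unacknowledged messages stay pending for retry.
import redis

r = redis.Redis()

# Producer side: append an event to the stream
r.xadd("events", {"type": "UserRegistered", "user_id": "42"})

# One-time setup: create the consumer group (raises if it already exists)
try:
    r.xgroup_create("events", "notifiers", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

# Worker side: read new messages for this consumer, then acknowledge
messages = r.xreadgroup("notifiers", "worker-1", {"events": ">"}, count=10, block=5000)
for stream, entries in messages:
    for msg_id, fields in entries:
        print("handling", fields)              # replace with real notification logic
        r.xack("events", "notifiers", msg_id)  # remove from the pending list
```

If a worker crashes before XACK, the message stays in the group's pending entries list, where another worker can reclaim it; that is the retry guarantee the post refers to.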
⚡ 𝐇𝐎𝐖 𝐑𝐄𝐃𝐈𝐒 𝐑𝐄𝐀𝐋𝐋𝐘 𝐖𝐎𝐑𝐊𝐒: 𝐓𝐇𝐄 𝐒𝐈𝐍𝐆𝐋𝐄-𝐓𝐇𝐑𝐄𝐀𝐃𝐄𝐃 𝐆𝐈𝐀𝐍𝐓

It seems counter-intuitive: how can 𝐑𝐞𝐝𝐢𝐬 be one of the fastest databases in the world while being single-threaded?

Most modern databases use multi-threading to handle multiple users, but Redis takes a different path. It uses an 𝐄𝐯𝐞𝐧𝐭 𝐋𝐨𝐨𝐩 and 𝐈/𝐎 𝐌𝐮𝐥𝐭𝐢𝐩𝐥𝐞𝐱𝐢𝐧𝐠.

Because Redis stores everything in memory (RAM), there is no waiting for slow disk seeks. By staying single-threaded, it avoids the massive overhead of "Context Switching" and "Locking" that multi-threaded systems face. It's like a world-class chef who only cooks, while the operating system (using epoll or kqueue) handles all the "waiters" and "orders" in the background.

This architecture allows Redis to handle millions of operations per second with sub-millisecond latency. For engineers, understanding this helps you realize why Redis is perfect for real-time leaderboards, session management, and caching, but less ideal for long-running, CPU-intensive computations.

https://buff.ly/GYcqAMy

#Redis #Backend #SystemDesign #Database #HighPerformance
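To ground the leaderboard use case mentioned above, a small redis-py sketch using sorted sets; the key and member names are illustrative:

```python
# Real-time leaderboard on a Redis sorted set: each update is a single
# atomic O(log N) command handled on the server's command thread.
import redis

r = redis.Redis()

r.zincrby("leaderboard", 50, "alice")   # alice scores 50 points
r.zincrby("leaderboard", 80, "bob")     # bob scores 80 points

# Top 10 players, highest score first
top = r.zrevrange("leaderboard", 0, 9, withscores=True)
print(top)  # [(b'bob', 80.0), (b'alice', 50.0)]

# A player's rank (0-based, descending by score) among all players
print(r.zrevrank("leaderboard", "alice"))  # 1
```

Because every ZINCRBY runs to completion before the next command, concurrent score updates never need client-side locking.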
⚡ If your database is slowing you down… you might not need a better database. You might just need Redis.

Think about it: every millisecond matters in modern applications. And that's exactly where Redis shines 👇

🚀 In-memory speed = lightning-fast reads & writes
📦 Perfect for caching = reduce database load instantly
🔄 Pub/Sub = real-time communication made simple
🧠 Data structures = strings, hashes, lists, sets… all built-in

But here's the part people overlook…

Redis isn't just a cache ❗ It's a performance layer. Used right, it can:
✅ Handle millions of requests
✅ Power leaderboards & real-time analytics
✅ Enable session management at scale

But misuse it, and:
⚠️ Memory costs can explode
⚠️ Data persistence needs careful planning
⚠️ Complexity creeps in fast

💡 The real magic? Using Redis strategically, not everywhere. Cache what matters. Optimize what hurts. Measure everything. (A classic cache-aside sketch follows below.)

Are you using Redis as just a cache… or as a game-changer?

#Redis #SystemDesign #BackendEngineering #Performance #Scalability #DevOps
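"Cache what matters" usually means the cache-aside pattern. A minimal sketch with redis-py, where `load_user_from_db` and the TTL are stand-ins, not anything from the post:

```python
# Cache-aside: try Redis first, fall back to the database on a miss,
# then populate the cache with a TTL so stale entries age out.
import json
import redis

r = redis.Redis()

def load_user_from_db(user_id: str) -> dict:
    # placeholder for the real database query
    return {"id": user_id, "name": "example"}

def get_user(user_id: str, ttl_s: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no DB round trip
    user = load_user_from_db(user_id)        # cache miss: hit the database
    r.setex(key, ttl_s, json.dumps(user))    # repopulate with an expiry
    return user
```

The TTL is the "measure everything" lever: too short and the database still takes the load, too long and memory costs grow while data drifts stale.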