We’ve just shipped 🚀 @dfsync/client 0.7.x.

This release is focused on something that usually gets overlooked: observability in service-to-service communication.

When requests start failing in production, the question is rarely "did it fail?" It's:
- how long did it take?
- how many retries happened?
- why did we retry?
- what actually changed between attempts?

In 0.7.x we made these details explicit. You now get:
- request timing metadata (startedAt, endedAt, durationMs)
- retry metadata (attempt, delay, reason)
- support for the Retry-After header
- a retry lifecycle hook (onRetry)
- a clear retry delay source (backoff vs retry-after)

This makes debugging, monitoring, and reasoning about distributed systems much more predictable.

Next step → integration safety (0.8.x):
- response validation
- idempotency keys

Step by step, making service-to-service communication more reliable.

NPM: https://lnkd.in/dWtHvhNS
GitHub: https://lnkd.in/d2Drgx4i

#nodejs #typescript #microservices #backend #softwareengineering #observability #distributedSystems #opensource
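To make the retry metadata concrete, here is a minimal sketch of what consuming it through an onRetry-style hook could look like. The field names (attempt, delayMs, reason, delaySource) are illustrative assumptions, not dfsync's actual API:

```typescript
// Hypothetical sketch: structured logging from a retry lifecycle hook.
// All names here are illustrative, not @dfsync/client's real surface.
interface RetryInfo {
  attempt: number;        // 1-based retry attempt
  delayMs: number;        // how long we wait before retrying
  reason: string;         // e.g. "http-503" or "network-error"
  delaySource: "backoff" | "retry-after"; // where delayMs came from
}

function logRetry(info: RetryInfo): string {
  // One structured line per retry lets dashboards distinguish
  // backoff-driven waits from server-driven Retry-After waits.
  return `retry attempt=${info.attempt} delay=${info.delayMs}ms ` +
         `reason=${info.reason} source=${info.delaySource}`;
}

const line = logRetry({ attempt: 2, delayMs: 1500, reason: "http-503", delaySource: "retry-after" });
console.log(line); // retry attempt=2 delay=1500ms reason=http-503 source=retry-after
```

Emitting the delay source explicitly is the part that usually pays off: it tells you whether the server asked you to slow down or your own backoff policy did.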
dfsync’s Post
---
After shipping 0.6.x, @dfsync/client is starting to feel like something we actually wanted to have in real projects.

What changed is not just features: it's how requests behave across services. You can now:
• cancel requests properly via AbortSignal
• pass metadata through the whole request lifecycle (context)
• propagate correlation IDs across services (x-request-id)
• access richer execution details inside hooks

It's a small set of things, but together they make service-to-service communication much more predictable. No more guessing what happened with a request.

If you've ever debugged retries, timeouts, or "random" failures between services, you know why this matters.

GitHub: https://lnkd.in/d2Drgx4i
NPM: https://lnkd.in/dWtHvhNS

We're now moving towards observability and safer integrations next. If this sounds useful, feel free to take a look or drop feedback.

#opensource #nodejs #microservices #backend #softwareengineering #api
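The two mechanisms named above (AbortSignal cancellation and x-request-id propagation) are standard-platform features, so they can be sketched without dfsync itself. This is a generic illustration of the underlying mechanics, not the library's actual API:

```typescript
// Generic sketch of cancellation + correlation-ID propagation using
// only standard Node 18+ APIs. dfsync's real interface will differ.
import { randomUUID } from "node:crypto";

function withCorrelationId(headers: Record<string, string>, requestId?: string): Record<string, string> {
  // Reuse an incoming x-request-id if one exists, otherwise mint one,
  // so every hop in a call chain shares the same correlation ID.
  return { ...headers, "x-request-id": requestId ?? randomUUID() };
}

async function cancellableDelay(ms: number, signal: AbortSignal): Promise<string> {
  // Stand-in for an HTTP call: resolves after ms, rejects on abort.
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve("ok"), ms);
    signal.addEventListener("abort", () => {
      clearTimeout(t);
      reject(new Error("aborted"));
    });
  });
}

const ac = new AbortController();
setTimeout(() => ac.abort(), 10); // cancel after 10ms
cancellableDelay(1000, ac.signal).catch((e) => console.log(e.message)); // prints "aborted"
```

The same AbortSignal can be passed straight to `fetch`, which is why libraries that accept one compose cleanly with timeouts and user-initiated cancellation.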
---
Server Actions changed how I think about forms.

No more writing API routes for simple mutations.
No more managing loading states manually.
No more form libraries for basic CRUD.

The pattern is: the form submits directly to a server function. Validation happens server-side. Revalidation is automatic.

It's not perfect for everything: complex multi-step flows still need client state. But for 80% of form interactions, this is a massive simplification.

Less code, fewer bugs, faster delivery.

Have Server Actions replaced your API routes yet?

#NextJS #React #ServerActions #WebDev
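The core of the pattern can be sketched framework-free: a server-side function receives FormData, validates it, and performs the mutation. In Next.js this function would live in a file marked "use server" and be wired directly to `<form action={createTodo}>`; the in-memory array below is a stand-in for a database:

```typescript
// Framework-free sketch of the Server Action shape: FormData in,
// server-side validation, mutation, structured result out.
type ActionResult = { ok: true; id: number } | { ok: false; error: string };

const todos: { id: number; title: string }[] = []; // stand-in for a database

function createTodo(form: FormData): ActionResult {
  const title = form.get("title");
  // Server-side validation: never trust what the client sent.
  if (typeof title !== "string" || title.trim().length === 0) {
    return { ok: false, error: "title is required" };
  }
  const id = todos.length + 1;
  todos.push({ id, title: title.trim() });
  // In Next.js you would call revalidatePath("/todos") here so the
  // cached page re-renders automatically.
  return { ok: true, id };
}

const form = new FormData();
form.set("title", "write release notes");
console.log(createTodo(form)); // { ok: true, id: 1 }
```

The discriminated-union return type is what makes the "no loading-state bookkeeping" claim work in practice: the form component just renders whichever branch came back.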
---
Type safety often breaks at the API boundary. Frontend and backend define their own types, leading to duplication, mismatches, and runtime errors ⚠️

tRPC takes a different approach: it enables end-to-end type safety using TypeScript inference 🧠, without requiring schema definitions or code generation.

https://lnkd.in/g-fgqxaf

Types are shared directly between client and server, reducing complexity and improving development speed ⚡

The result is a simpler, more reliable way to build APIs where type consistency is guaranteed.

#TypeScript #API #SoftwareArchitecture #WebDevelopment #Tech
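The inference idea can be shown without tRPC installed: the server's router object is the single source of truth, and the client derives types from it with `typeof` instead of a shared schema. This is a simplified sketch of the principle; real tRPC layers procedures, middleware, and transport on top:

```typescript
// The tRPC principle in plain TypeScript: infer client types from the
// server's router instead of duplicating interfaces.
const router = {
  getUser: (id: number) => ({ id, name: "Ada" }),
  listPosts: () => [{ title: "Hello" }],
};

// The "client" imports only the *type*, never the implementation.
type AppRouter = typeof router;
type UserResult = ReturnType<AppRouter["getUser"]>; // { id: number; name: string }

function renderUser(user: UserResult): string {
  // user.name is known to exist at compile time: no duplicated
  // interface, no codegen step, no runtime mismatch.
  return `#${user.id}: ${user.name}`;
}

console.log(renderUser(router.getUser(7))); // "#7: Ada"
```

If the server renames `name` to `fullName`, the client stops compiling immediately, which is exactly the boundary-breakage the post describes.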
---
💡 **Full Stack Debugging Tip**

When a feature breaks, don't guess: trace the flow.

Check step by step: Frontend → API → Backend → Database

🎯 Identify exactly where the issue occurs. This saves hours of random debugging.

#Debugging #FullStack #SoftwareEngineering
---
🚀 Built and Deployed a High-Performance Load Testing Platform

Over the past few days, I worked on building a system to simulate high-concurrency API traffic and analyze performance in real time.

💡 Tech Stack:
- Go (high-performance load engine using goroutines)
- Node.js (orchestrator & API layer)
- React (interactive dashboard)

⚙️ What it does:
- Simulates thousands of concurrent requests
- Supports both fixed-request and RPS-based load testing
- Measures latency (P50, P95, P99), throughput, and error rates
- Displays results in a clean UI dashboard

🚀 Key Engineering Highlights:
- Implemented the worker pool pattern in Go for efficient concurrency
- Used token-bucket rate limiting for accurate RPS control
- Optimized the HTTP client for connection reuse
- Built lock-free metrics collection using atomic operations
- Solved real-world deployment issues (Linux permissions, binary execution, environment mismatch)

📊 Successfully tested:
- 1000+ requests
- 50 concurrent users
- ~450+ req/sec throughput

🔧 Live Demo: 👉 https://lnkd.in/g-bJS7_H
💻 GitHub: 👉 https://lnkd.in/gyJYqGGC

This project helped me dive deeper into system design, performance tuning, and real-world deployment challenges. Would love to hear feedback or suggestions to improve it further!

#golang #nodejs #react #backend #systemdesign #performance #loadtesting #softwareengineering #opentowork
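The token-bucket rate limiting mentioned in the highlights is worth unpacking. The post's engine is written in Go; this is just the token-bucket idea sketched in TypeScript, with the clock injected so the behaviour is deterministic:

```typescript
// Token bucket for RPS control: tokens refill at `rps` per second up
// to a burst cap; each request spends one token, so sustained
// throughput converges on the target RPS. (The real engine is in Go.)
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private rps: number, private burst: number, now = 0) {
    this.tokens = burst;      // start full so an initial burst is allowed
    this.lastRefill = now;
  }

  // `now` is a millisecond timestamp, injected for testability.
  tryAcquire(now: number): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.rps);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;       // spend one token for this request
      return true;
    }
    return false;             // caller should wait and retry
  }
}

const bucket = new TokenBucket(10, 2); // 10 req/s, burst of 2
console.log(bucket.tryAcquire(0));     // true  (burst token 1)
console.log(bucket.tryAcquire(0));     // true  (burst token 2)
console.log(bucket.tryAcquire(0));     // false (bucket empty)
console.log(bucket.tryAcquire(100));   // true  (1 token refilled in 100ms)
```

Capping refill at `burst` is the design choice that prevents a long idle period from producing a huge request spike afterwards.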
---
Building and deploying an application shouldn't get you stuck on managing infrastructure.

We created Specific so your coding agent can define the infrastructure alongside the application code, run everything locally, and deploy it to production.

To show what that looks like, we built a full-stack task board from scratch using nothing but Claude Code prompts:
→ Frontend (React + Vite)
→ Backend API (Go)
→ Database (Postgres)
→ Authentication
→ Real-time sync across browsers
→ File storage
→ Background jobs

Every feature follows the same pattern: Prompt → specific dev → specific deploy

Full walkthrough in the comments 👇
---
Recently, I wrote about the problems with stitching together multiple platforms to run your stack.

This is what building and deploying a full-stack app looks like with Specific: frontend, backend, database, auth, real-time sync, file storage, and background jobs, all defined in one config file by your coding agent, run locally with a single command, and deployed to production with another.

Read the blog post to learn more 👇
---
Most developers use 1 way to upload files to S3. There are actually 5. And picking the wrong one is costing you money. 💸

Here's the complete breakdown:

1️⃣ Presigned URL PUT
Client uploads DIRECTLY to S3 using a time-limited URL from your server.
→ Best for: 90% of user uploads
→ Why: File never touches your server = zero bandwidth cost

2️⃣ Server Upload (Multer → S3)
File hits your server → validated → pushed to S3.
→ Best for: When you need virus scans or image processing
→ Why: Full control before anything reaches storage

3️⃣ Multipart Upload
File split into chunks, uploaded in parallel, reassembled by S3.
→ Best for: Files > 100MB, video uploads
→ Why: Resumable, parallel, handles up to 5TB

4️⃣ POST Policy
HTML form POSTs directly to S3 with a server-signed policy.
→ Best for: No-JS form uploads
→ Why: S3 enforces constraints without extra backend logic

5️⃣ AWS SDK v3 (Backend only)
Direct SDK commands from your server/Lambda.
→ Best for: Migrations, automation, cron jobs
→ Why: No human in the loop

The rule is simple:
📦 Small user files → Presigned URL
🔍 Need validation → Server Upload
🎬 Large/video files → Multipart
📝 HTML forms → POST Policy
🤖 Automation → SDK v3

#AWS #CloudArchitecture #Backend #S3 #JavaScript #NodeJS #DevTips #SoftwareEngineering
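The decision rules above can be encoded as a tiny helper. The thresholds and priorities come straight from the post; the function name and input shape are illustrative, not any AWS API:

```typescript
// The post's decision table as code. Priority order matters: automation
// and validation needs override the size-based default.
type UploadMethod = "presigned-url" | "server-upload" | "multipart" | "post-policy" | "sdk-v3";

interface UploadContext {
  sizeMB: number;
  needsValidation?: boolean;  // virus scan / image processing
  isHtmlForm?: boolean;       // no-JS form upload
  isAutomation?: boolean;     // migration / cron job, no human involved
}

function chooseUploadMethod(ctx: UploadContext): UploadMethod {
  if (ctx.isAutomation) return "sdk-v3";           // backend-only SDK commands
  if (ctx.needsValidation) return "server-upload"; // inspect before storage
  if (ctx.sizeMB > 100) return "multipart";        // resumable, parallel chunks
  if (ctx.isHtmlForm) return "post-policy";        // S3 enforces the constraints
  return "presigned-url";                          // default: zero server bandwidth
}

console.log(chooseUploadMethod({ sizeMB: 2 }));                     // "presigned-url"
console.log(chooseUploadMethod({ sizeMB: 500 }));                   // "multipart"
console.log(chooseUploadMethod({ sizeMB: 2, isAutomation: true })); // "sdk-v3"
```

For option 1️⃣ itself, the AWS SDK v3 pieces are `PutObjectCommand` from `@aws-sdk/client-s3` signed with `getSignedUrl` from `@aws-sdk/s3-request-presigner`, which returns the time-limited URL the client PUTs to.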
---
NestJS Middleware – Structured Request Pre-Processing

Middleware in NestJS is executed before route handlers and is primarily used for low-level request processing tasks.

What you can do with Middleware:
- Authentication pre-checks (JWT parsing, session validation)
- Request logging and tracing (IP, headers, request ID)
- Parsing cookies, headers, and body data
- Attaching metadata/context to the request (e.g., correlation ID)
- Rate limiting and basic security checks

Why it matters:
Middleware is ideal for handling lightweight, early-stage concerns before the request enters deeper layers like Guards, Pipes, and Interceptors.

Why it's easier & better than raw Express:
- Scoped and organized via modules (not globally scattered)
- Clear separation between middleware, guards, and interceptors
- Better maintainability and readability
- Consistent architecture across teams and services

In the real world:
In production environments, middleware is often used for request tracing, multi-tenant context injection, and security layers. Combined with other NestJS components, it forms a robust and scalable request lifecycle pipeline.

#NestJS #Middleware #NodeJS #BackendDevelopment #SoftwareArchitecture #CleanCode #APIDesign #TypeScript #SoftwareEngineering
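The correlation-ID use case above makes a good concrete example. NestJS functional middleware uses the Express-style `(req, res, next)` signature, so the sketch below runs without NestJS installed; in a real app you would register it inside a module's `configure()` via `consumer.apply(correlationId).forRoutes("*")`. The request/response interfaces here are minimal stubs, not Nest's real types:

```typescript
// Functional middleware sketch: attach a correlation ID early so every
// deeper layer (guards, pipes, interceptors, handlers) can log with it.
import { randomUUID } from "node:crypto";

interface Req { headers: Record<string, string>; correlationId?: string }
interface Res { setHeader(name: string, value: string): void }

function correlationId(req: Req, res: Res, next: () => void): void {
  // Reuse the caller's x-request-id when present, otherwise mint one,
  // then expose it both downstream and back to the client.
  const id = req.headers["x-request-id"] ?? randomUUID();
  req.correlationId = id;
  res.setHeader("x-request-id", id);
  next();
}

// Stubbed request/response to show the behaviour without a server.
const req: Req = { headers: { "x-request-id": "abc-123" } };
const sent: Record<string, string> = {};
correlationId(req, { setHeader: (n, v) => { sent[n] = v; } }, () => {});
console.log(req.correlationId, sent["x-request-id"]); // abc-123 abc-123
```

Keeping this logic in middleware rather than in each handler is exactly the "scoped and organized via modules" benefit the post describes.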
---
Day 14 of Backend Secrets

Your fast code can still make your system slow.

You optimized your code:
- Reduced loops.
- Improved logic.

Everything runs in milliseconds locally. But in production? Your API still takes 2 seconds.

Why? Because backend performance is not just about your code.

Here's what actually slows things down:

1️⃣ Database queries
Even perfect code becomes slow if your query takes 500ms.

2️⃣ Network latency
Your request travels across networks, services, and regions.

3️⃣ External APIs
Calling third-party services adds unpredictable delays.

4️⃣ Serialization
Converting large data into JSON takes time.

5️⃣ Cold starts / infra delays
Servers waking up, containers starting, etc.

So your "fast code" might only be 10% of total response time. The rest is everything around it.

Backend secret: you don't optimize systems by writing faster code. You optimize them by reducing dependencies and waiting time.

#BackendSecrets #BackendDevelopment #SystemDesign Rahul Maheshwari #Performance #SoftwareEngineering #BuildInPublic #Developers
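One way to see this breakdown in your own service is to time each stage of a request separately. A minimal sketch, with invented stage names and durations standing in for a real database, external API, and serialization step:

```typescript
// Per-stage timing sketch: wrap each phase of a request in a timer so
// logs show where the milliseconds actually go. Durations are simulated.
async function timed<T>(label: string, timings: Record<string, number>, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await fn();
  } finally {
    timings[label] = performance.now() - start;
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function handleRequest(): Promise<Record<string, number>> {
  const timings: Record<string, number> = {};
  await timed("db-query", timings, () => sleep(50));     // the slow query dominates
  await timed("external-api", timings, () => sleep(20)); // third-party call
  await timed("serialize", timings, async () => { JSON.stringify({ big: "payload" }); });
  return timings;
}

handleRequest().then((t) => {
  const total = Object.values(t).reduce((a, b) => a + b, 0);
  // "Your code" (serialize) is a tiny slice of the total wait.
  console.log(`db=${t["db-query"].toFixed(0)}ms of total=${total.toFixed(0)}ms`);
});
```

Once the stages are measured, the post's conclusion falls out of the numbers: shaving the serialize step to zero barely moves the total, while removing one dependency call does.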