Building and deploying an application shouldn't get you stuck on managing infrastructure. We created Specific so your coding agent can define the infrastructure alongside the application code, run everything locally, and deploy it to production.

To show what that looks like, we built a full-stack task board from scratch using nothing but Claude Code prompts:
→ Frontend (React + Vite)
→ Backend API (Go)
→ Database (Postgres)
→ Authentication
→ Real-time sync across browsers
→ File storage
→ Background jobs

Every feature follows the same pattern: Prompt → specific dev → specific deploy

Full walkthrough in the comments 👇
Specific (YC F25)’s Post
More Relevant Posts
-
Recently, I wrote about the problems with stitching together multiple platforms to run your stack. This is what building and deploying a full-stack app looks like with Specific: frontend, backend, database, auth, real-time sync, file storage, and background jobs, all defined in one config file by your coding agent, run locally with a single command, and deployed to production with another. Read the blog post to learn more 👇
-
Every developer has that one bug that haunts them for days. Mine was a 404 error that only appeared in production.

Localhost? Perfect. Staging? Flawless. Production? Broken.

Spent 6 hours debugging. Checked routes, middleware, server configs, DNS settings. Nothing worked.

Then I noticed it. A single trailing slash in the API endpoint. One character. That's all it was. The backend expected /api/users — I was sending /api/users/

The frustrating part? The error message gave zero clues. Just a generic 404. No stack trace, no helpful logs.

Here's what I learned:
- Always log request URLs in production. Not just errors — actual incoming requests. It saves hours of guesswork.
- Use strict routing rules or normalize URLs on both ends. Don't assume the client and server will always match.
- Test edge cases. Add a slash, remove a slash, add random params. Break your own code before users do.

That one-character bug taught me more about API design than any tutorial ever did.

What's the smallest bug that cost you the most time?

#WebDevelopment #SoftwareEngineering #DebuggingLife #APIDesign #DeveloperStories #LearnInPublic
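In Express terms, the "log requests and normalize URLs" lessons look roughly like this. A sketch only: the route, log format, and normalization rule are illustrative, not a claim about the original stack.

import express from "express";

const app = express();

// Log every incoming request URL, not just errors, so a stray trailing
// slash shows up in the logs instead of hiding behind a generic 404.
app.use((req, _res, next) => {
  console.log(`${req.method} ${req.originalUrl}`);
  next();
});

// Normalize on the server side: strip a single trailing slash before routing,
// so /api/users and /api/users/ reach the same handler.
app.use((req, _res, next) => {
  if (req.path.length > 1 && req.path.endsWith("/")) {
    const query = req.url.slice(req.path.length); // keep ?query=... intact
    req.url = req.path.slice(0, -1) + query;
  }
  next();
});

app.get("/api/users", (_req, res) => {
  res.json([{ id: 1, name: "Ada" }]);
});

app.listen(3000);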
-
We’ve just shipped 🚀 @dfsync/client 0.7.x.

This release is focused on something that usually gets overlooked — observability in service-to-service communication.

When requests start failing in production, it’s rarely about “did it fail?” It’s about:
- how long did it take?
- how many retries happened?
- why did we retry?
- what actually changed between attempts?

In 0.7.x we made these details explicit. You now get:
- request timing metadata (startedAt, endedAt, durationMs)
- retry metadata (attempt, delay, reason)
- support for the Retry-After header
- retry lifecycle hook (onRetry)
- clear retry delay source (backoff vs retry-after)

This makes debugging, monitoring, and reasoning about distributed systems much more predictable.

Next step → integration safety (0.8.x):
- response validation
- idempotency keys

Step by step, making service-to-service communication more reliable.

NPM: https://lnkd.in/dWtHvhNS
GitHub: https://lnkd.in/d2Drgx4i

#nodejs #typescript #microservices #backend #softwareengineering #observability #distributedSystems #opensource
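This is not the @dfsync/client API; just to make that metadata concrete, here is a generic TypeScript fetch-with-retry sketch that surfaces the same kind of information (timing, attempt, delay, delay source, reason, onRetry). All names are illustrative.

type RetryInfo = {
  attempt: number;                        // which retry is about to run (1-based)
  delayMs: number;                        // how long we wait before retrying
  delaySource: "backoff" | "retry-after"; // where the delay came from
  reason: string;                         // e.g. "HTTP 503" or a network error
};

type Timing = { startedAt: number; endedAt: number; durationMs: number };

async function fetchWithRetry(
  url: string,
  maxRetries = 3,
  onRetry?: (info: RetryInfo) => void,
): Promise<{ response: Response; timing: Timing; attempts: number }> {
  const startedAt = Date.now();
  for (let attempt = 0; ; attempt++) {
    const result = await fetch(url).catch((err: Error) => err);
    const retriable =
      result instanceof Error || result.status === 429 || result.status >= 500;

    if (!retriable || attempt >= maxRetries) {
      if (result instanceof Error) throw result;
      const endedAt = Date.now();
      return {
        response: result,
        timing: { startedAt, endedAt, durationMs: endedAt - startedAt },
        attempts: attempt + 1,
      };
    }

    // Prefer the server's Retry-After header; otherwise use exponential backoff.
    const retryAfter = result instanceof Error ? null : result.headers.get("Retry-After");
    const delayMs = retryAfter ? Number(retryAfter) * 1000 : 2 ** attempt * 250;
    const delaySource: RetryInfo["delaySource"] = retryAfter ? "retry-after" : "backoff";
    const reason = result instanceof Error ? result.message : `HTTP ${result.status}`;

    onRetry?.({ attempt: attempt + 1, delayMs, delaySource, reason });
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}

// e.g. fetchWithRetry("https://example.com/api", 3, (r) => console.warn("retrying:", r));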
-
Introducing Spring Idempotency Kit — prevent duplicate operations in distributed systems with a single annotation.

Duplicate payments. Double-created orders. Retried webhooks firing twice. If you've built microservices, you've fought these bugs. We built Spring Idempotency Kit to solve this once and for all.

@PostMapping("/payments")
@Idempotent(key = "#request.idempotencyKey")
public PaymentResponse process(@RequestBody PaymentRequest request) {
    return paymentService.charge(request);
}

That's it. One annotation. Your method now executes exactly once per key.

What's under the hood:
→ Redis-backed distributed locking with atomic SET NX
→ Automatic response caching — retries return the cached result instantly
→ Two concurrency strategies: REJECT (409 Conflict) or WAIT (block until result is ready)
→ Fail-open design — Redis goes down, your app keeps running
→ Dual key resolution: SpEL expressions or HTTP headers
→ Configurable TTL per method
→ Spring Boot 3.x auto-configuration with zero boilerplate

Built for real-world scenarios:
• Payment processing — no more double charges
• Order creation APIs — safe client retries
• Long-running report generation — concurrent callers wait for the result
• Webhook handlers — process each event exactly once

Open source under Apache 2.0. Java 21+, Spring Boot 3.4+.

GitHub: https://lnkd.in/eYHEFXSH

Star it, try it, break it. PRs welcome.

#SpringBoot #Java #OpenSource #DistributedSystems #Microservices #Idempotency #BackendEngineering
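The same idea, sketched outside Spring in TypeScript/Express (not this library, just the core mechanism), with an in-memory Map standing in for the Redis SET NX lock; TTL and error handling are omitted for brevity:

import express, { Request, RequestHandler } from "express";

type Entry = { status: "pending" } | { status: "done"; body: unknown };
const store = new Map<string, Entry>(); // stand-in for Redis

function idempotent(work: (req: Request) => Promise<unknown>): RequestHandler {
  return async (req, res) => {
    const key = req.header("Idempotency-Key");
    if (!key) {
      res.status(400).json({ error: "Idempotency-Key header required" });
      return;
    }
    const existing = store.get(key);
    if (existing) {
      if (existing.status === "pending") {
        res.status(409).json({ error: "request already in progress" }); // REJECT strategy
        return;
      }
      res.json(existing.body); // a retry gets the cached result instantly
      return;
    }
    store.set(key, { status: "pending" });    // the "lock"; Redis would use SET ... NX EX <ttl>
    const body = await work(req);
    store.set(key, { status: "done", body }); // cache the response for future retries
    res.json(body);
  };
}

const app = express();
app.use(express.json());
app.post("/payments", idempotent(async (req) => {
  // the real charge() call would go here
  return { charged: true, amount: req.body.amount };
}));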
-
I copy-pasted my auth middleware across 3 different projects and never questioned it.

When I picked up NestJS, Guards felt like unnecessary complexity at first. Why not just write a middleware function like I always did? Took me a bit to actually get the difference.

Express middleware is generic. It runs, does its thing, calls next(). It has no idea what controller or handler it's going to. It's just in the pipeline.

Guards run later in the lifecycle and they have the execution context. They know what's about to be called. So your auth logic can actually be tied to the route it's protecting, not just floating somewhere in a middleware chain.

In practice, instead of stacking middleware on routers or repeating checks inside handlers, I just put @UseGuards(IsAdminOrTeacher) on the method. The intent sits right there in the code.

Middleware still makes sense for things that genuinely don't care about route context, like logging or parsing. But for authorization, Guards fit better. Not because middleware can't do it, it can, but because Guards are built for exactly this.

Both solve the same problem. One just communicates the intent more clearly.

#NestJS #NodeJS #BackendDevelopment #ExpressJS #LearningInPublic
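A minimal guard along those lines might look like this. The role names and the request.user shape are assumptions: whatever auth step runs before the guard is expected to attach the user.

import { CanActivate, ExecutionContext, Injectable } from "@nestjs/common";

@Injectable()
export class IsAdminOrTeacher implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    // The guard receives the execution context, so it knows which handler
    // is about to run; Express middleware never sees this.
    const request = context.switchToHttp().getRequest();
    const user = request.user; // assumed to be set by an earlier auth step
    return user?.role === "admin" || user?.role === "teacher";
  }
}

// On the handler:
// @UseGuards(IsAdminOrTeacher)
// @Post("assignments")
// createAssignment() { ... }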
-
📈 Part 11

The API was working, tested, secured, and containerized. But somewhere along the way I realized that if something breaks, I have absolutely no way to know what happened or when. That question pushed me toward something I had only read about until now: Structured Logging using Winston and Morgan.

✨ Key takeaways from this phase:

• Why console.log is not production logging
Before this I was using console.error inside the error handler and thinking that was enough. It is not. It has no timestamps, no severity levels, no file output. The moment I saw structured log output with timestamps and levels, the difference was immediately obvious.

• Log levels clicked for me practically
Reading about error, warn, info, http, debug as concepts felt abstract. Actually seeing the logs during testing (every request at http level, every 404 at error level with a full stack trace) made the whole thing concrete. Development logs everything. Production logs only what matters.

• Morgan and Winston serve different purposes
Morgan watches HTTP traffic. Winston handles everything else. Piping Morgan into Winston so both write to the same log files was a small thing technically but made me understand how logging systems are actually designed in real applications.

• What an audit trail actually means
After this phase every error in my API logs the message, the HTTP method, the URL it came from, and the full stack trace. If something breaks I can open a file and know exactly what happened.

2026-03-27 12:42:51 [INFO]: App configured successfully
2026-03-27 12:42:51 [HTTP]: GET / HTTP/1.1 200
2026-03-27 12:42:51 [ERROR]: Task not found PUT /tasks/99999

This phase also marks the completion of the entire project. ✨ Started with a basic Express server and ended with a production-style backend covering authentication, validation, testing, security, containerization, and logging. 🙌🏻

#BackendDevelopment #NodeJS #ExpressJS #PostgreSQL #Winston #Logging #RESTAPI #Learning #HandsOnLearning #Preparation #SoftwareDevelopment #fullStackDevelopment
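Roughly what that wiring looks like; a minimal sketch where the format string, file path, and error response are just examples:

import express, { NextFunction, Request, Response } from "express";
import morgan from "morgan";
import winston from "winston";

// Winston handles application logs: levels, timestamps, file output.
const logger = winston.createLogger({
  level: process.env.NODE_ENV === "production" ? "info" : "debug",
  format: winston.format.combine(
    winston.format.timestamp({ format: "YYYY-MM-DD HH:mm:ss" }),
    winston.format.printf(
      ({ timestamp, level, message }) => `${timestamp} [${level.toUpperCase()}]: ${message}`,
    ),
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: "logs/app.log" }),
  ],
});

const app = express();

// Morgan watches HTTP traffic; piping it into Winston's http level means
// request logs and application logs end up in the same files.
app.use(morgan("short", { stream: { write: (msg) => logger.http(msg.trim()) } }));

// Error handler: log message, method, URL and stack, then respond.
app.use((err: Error, req: Request, res: Response, _next: NextFunction) => {
  logger.error(`${err.message} ${req.method} ${req.originalUrl}\n${err.stack}`);
  res.status(500).json({ error: "Internal server error" });
});

app.listen(3000);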
-
Today’s class was very informative as we explored the backend project structure in depth. We learned how folders and files are organized and how different parts like controllers, middleware, models, routes, services, and API responses work together.

We also understood how to connect a database with the backend, which is a very important step in building real-world applications. This gave me a clear picture of how a complete backend system is structured and handled.

Learning these concepts is making backend development more interesting and practical. Piyush Garg

#chaicode #WebDevelopment #BackendDevelopment #NodeJS #ExpressJS #Database #BuildInPublic #LearningJourney
-
Backend Concepts Series #1 — What I misunderstood about HTTP

When I first started backend development, I was using HTTP every day through APIs. But honestly, I didn’t really understand how it worked underneath. Now, while revising core concepts, I’m writing down notes in a simpler way — sharing them here in case it helps someone who is where I was a year ago.

Concept: HTTP is stateless

Common misconception: “Once I log in, the server remembers me”

Reality: HTTP does not remember anything. Every request is treated independently.

What actually happens? Let’s say you:
- log in
- open your dashboard
- fetch some data

From the server’s perspective, each request is like: “I have never seen this user before”

Then how does login work? We add state manually on top of HTTP using:
- Cookies
- Sessions
- Tokens (JWT)

Simple flow:
1. You log in
2. Server verifies credentials
3. Server gives you a token or session ID
4. You send it with every request
5. Server uses it to identify you

Why this matters: if this concept is not clear, you’ll struggle with:
- Authentication bugs
- Session vs JWT decisions
- Understanding how scaling works
- Debugging “random” auth failures

Takeaway: HTTP is stateless by design. Authentication makes it behave as if it were stateful.

This is a very basic concept, and many of you may already know it — but getting this clarity early makes a big difference later.
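That flow in code, with Express and jsonwebtoken as one concrete option. The secret, the hard-coded credential check, and the payload are placeholders:

import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());
const SECRET = "change-me"; // placeholder; keep real secrets in env/config

// Steps 1-3: client logs in, server verifies credentials and issues a token.
app.post("/login", (req, res) => {
  const { username, password } = req.body;
  if (password !== "demo") {
    res.status(401).json({ error: "invalid credentials" }); // stand-in check
    return;
  }
  const token = jwt.sign({ sub: username }, SECRET, { expiresIn: "1h" });
  res.json({ token });
});

// Steps 4-5: the client sends the token on every request, and the server
// re-identifies the user each time, because HTTP itself remembers nothing.
app.get("/dashboard", (req, res) => {
  const token = req.header("Authorization")?.replace("Bearer ", "");
  try {
    const payload = jwt.verify(token ?? "", SECRET);
    const sub = typeof payload === "string" ? payload : payload.sub;
    res.json({ message: `hello ${sub}` });
  } catch {
    res.status(401).json({ error: "missing or invalid token" });
  }
});

app.listen(3000);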
-
Nginx was running. Port 80 was open. The application was responding when I tested it directly. But the ALB kept marking every instance as unhealthy.

I checked the instances. They were fine. I checked the security groups, checked the ALB listener rules. Everything looked right. The ALB still refused to send traffic to any of them.

The issue was with the health check configuration. The ALB was checking the root path. Every 30 seconds it was sending a request to "/" on each instance and waiting for a response. The issue is that "/" on Nginx was set up to proxy requests through to the Tomcat backend. So every health check was going all the way through to the backend, waiting for a full application response, and timing out before it got one.

The fix was one line. Change the health check path from "/" to "/health". The "/health" endpoint on Nginx returns a simple 200 response immediately without touching the backend at all. It exists specifically for this purpose. I made the change. Within 2 minutes every instance was showing healthy in the target group and the ALB started routing traffic normally.

I knew what health checks were. I had configured one. What I did not catch is that the path I set was routing every check through to the Tomcat backend before returning a response. So every 30 seconds the ALB was waiting for the full application stack to respond just to confirm that Nginx was alive. When the backend was slow, Nginx looked unhealthy. They were not the same thing, but the health check was treating them like they were.

Changing the path to "/health" made the check return immediately without touching anything downstream. Instances went healthy in 2 minutes.

This is part of a series on a production-grade 3-tier Java architecture I built on AWS. This week covers the bugs that taught me the most.

#devOps #buildinginpublic #day25

Repo: https://lnkd.in/dQx6_rxb
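For reference, the shape of that Nginx setup. Illustrative only; the upstream name and address are placeholders, not the actual config from the repo:

upstream tomcat_backend {
    server 127.0.0.1:8080;   # placeholder for the Tomcat instance
}

server {
    listen 80;

    # Lightweight target for ALB health checks: answers 200 immediately,
    # without touching the Tomcat backend.
    location = /health {
        access_log off;
        return 200 'OK';
    }

    # Everything else is proxied through to the application.
    location / {
        proxy_pass http://tomcat_backend;
    }
}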
-
Just spent the past 2 hours going through the Claude Managed Agents docs and I’m more inspired than ever! A few initial thoughts:

> Devs have a job again 😅 It’s a fairly technical setup, but that being said, you can also give Claude a skill + the new ant cli and ask it to figure a lot out for you

> It seems to be entirely reactive (for now) and has no built-in trigger primitives like running on a schedule

> A little confused why the custom tool calls can’t also be uploaded to the managed infrastructure. Katelyn Lesse could you maybe answer what the limitations were of this? IMO it makes the code more distributed and harder to maintain, but gives you more control over running the custom tool calls yourself

> I love that SQLite is fully available locally in the container. It opens up some neat patterns for stateful single-session agents that need a real query layer without spinning up external infra. (Note: Postgres/Redis aren’t running by default - you get the client tools to connect out, not the servers themselves)

> I love the architecture underneath it: the model, disposable Linux containers, and a durable session event log are decoupled and scale independently. Container crashes? Spin up a fresh one, the agent keeps going. System restarts? The session log picks up where it left off. It’s the same separation of concerns backend engineers have relied on for years and how we handle our architecture at Trusted Shops (stateless workers + durable queues + append-only logs)

What’s the first thing you will build with this, and what do you think of managed agents vs the messages api?
Read the full task board walkthrough here: https://specific.dev/blog/build-with-specific-task-board