AI won't replace engineers. But engineers who ship 5x faster and safer will replace those who don't.

I've been shipping code with AI assistance at AWS since 2024, but it took me a few weeks to figure out how to actually use AI tools without fighting them. Most of what made the difference isn't in any tutorial. It's the judgment you build by doing. Here's what worked for me:

1. Take the lead.
• AI doesn't know your codebase, your team's conventions, or why that weird helper function exists. You do. Act like the tech lead in the conversation.
• Scope your asks tightly. "Write a function that takes a list of user IDs and returns a map of user ID to last login timestamp" works. "Help me build the auth flow" gets you garbage.
• When it gives you code, ask it to explain the tradeoffs.

2. Use it for the boring and repetitive things first.
• Unit tests are the easiest win. Give it your function, tell it the edge cases you care about, and let it generate the test scaffolding.
• Boilerplate like mappers, config files, and CI scripts: things that take 30 minutes but need zero creativity.
• Regex is where AI shines. Describe what you want to match and it hands you a working pattern in seconds.
• Documentation too. Feed it your code and ask for inline comments or a README draft. You'll still edit it, but the first draft is free.

3. Know when to stop prompting and start coding.
• AI hallucinates confidently. It will tell you a method exists when it doesn't. It will invent API parameters. Trust but verify.
• Some problems are genuinely hard: race conditions, complex state management, weird legacy interactions. AI can't reason about your system the way you can.
• Use AI to get 70% of the way there fast, then take over. The last 30% is where your judgment matters.

4. Build your own prompt library.
• Always include language, framework, and constraints. "Write this in Python <desired-version>, no external dependencies, needs to run in Lambda" gets you usable code. "Write this in Python" gets you a mess.
• Context is everything. Paste the relevant types, the function signature, and the error message. The more the AI knows, the less you fix.
• Over time, you'll develop intuition for what AI is good at and what it's bad at. That intuition is the core skill.

AI tools are multipliers. If your fundamentals are weak, they multiply confusion. If your fundamentals are strong, they multiply speed and output. Learn to work with them; the ROI is enormous.
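A tightly scoped ask like the user-IDs example tends to produce something directly usable. As a sketch of what a reasonable response might look like (the function name and the `login_events` input shape are illustrative assumptions, not from the original post):

```python
from datetime import datetime
from typing import Optional


def last_logins(
    user_ids: list[str],
    login_events: dict[str, list[datetime]],
) -> dict[str, Optional[datetime]]:
    """Map each user ID to its most recent login timestamp.

    Users with no recorded logins map to None.
    """
    return {
        uid: max(login_events.get(uid, []), default=None)
        for uid in user_ids
    }
```

The point is the scope: input types, output type, and edge-case behavior are all pinned down by the prompt, so there is little room for the model to wander.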
How to Use AI for Manual Coding Tasks
Explore top LinkedIn content from expert professionals.
Summary
AI tools are transforming manual coding tasks by acting as smart assistants that help automate repetitive work, write code, and structure projects. Using AI for manual coding means working alongside intelligent software that can generate code, plan tasks, and catch errors—freeing up developers to focus on the big picture and creative problem solving.
- Guide your AI: Be clear and specific about what you want by providing context like programming language, project requirements, and code samples for the AI to deliver useful results.
- Automate routine work: Use AI to quickly handle repetitive coding chores such as writing unit tests, generating boilerplate code, or drafting documentation, so you can spend more time on complex features.
- Plan and review: Break down coding tasks into smaller steps, have the AI help draft plans or design documents, and review the AI’s output carefully before moving forward to catch mistakes early.
AI coding assistants are changing the way software gets built. I've recently taken a deep dive into three powerful AI coding tools: Claude Code (Anthropic), OpenAI Codex, and Cursor. Here's what stood out to me:

Claude Code (Anthropic) feels like a highly skilled engineer integrated directly into your terminal. You give it a natural-language instruction, like a bug to fix or a feature to build, and it autonomously reads through your entire codebase, plans the solution, makes precise edits, runs your tests, and even prepares pull requests. Its strength lies in effortlessly managing complex tasks across large repositories, making it uniquely effective for substantial refactors and large monorepos.

OpenAI Codex, now embedded within ChatGPT and also accessible via its CLI tool, operates as a remote coding assistant. You describe a task in plain English; it uploads your project to a secure cloud sandbox, then iteratively generates, tests, and refines code until it meets your requirements. It excels at quickly prototyping ideas or handling multiple parallel tasks in isolation. This makes Codex particularly powerful for automated, iterative development workflows, perfect for agile experimentation or rapid feature implementation.

Cursor is essentially a fully AI-powered IDE built on VS Code. It integrates deeply with your editor, providing intelligent code completions, inline refactoring, and automated debugging ("Bug Bot"). With real-time awareness of your codebase, Cursor feels like having a dedicated AI pair programmer embedded right into your workflow. Its agent mode can autonomously tackle multi-step coding tasks while you maintain direct oversight, enhancing productivity during everyday coding.

Each tool uniquely shapes development: Claude Code excels at autonomous long-form tasks, handling entire workflows end to end. Codex stands out for rapid, cloud-based iteration and parallel task execution. Cursor seamlessly blends AI support directly into your coding environment for instant productivity boosts.

As AI continues to evolve, these tools offer a glimpse into a future where software development becomes less about writing code and more about articulating ideas clearly, managing workflows efficiently, and letting the AI handle the heavy lifting.
-
I shipped 100,000 lines of high-quality code in 2 weeks using AI coding agents. But here's what nobody talks about: we're deploying AI coding tools without the infrastructure they need to actually work.

When we onboard a developer, we give them documentation, coding standards, proven workflows, and collaboration tools. When we "deploy" a coding agent, we give them nothing and ask them to spend time changing their behavior and workflows on top of actively shipping code.

So I compiled what I'm calling AI Coding Agent Infrastructure, the missing support layer:
• Skills with mandatory skill checking that make it structurally impossible for agents to rationalize away test-driven development (TDD) or skip proven workflows (Credits: Superpowers Framework by Jesse Vincent, Anthropic Skills, custom prompt-engineer skill based on Anthropic's prompt engineering overview).
• 114+ specialized sub-agents that work in parallel (up to 50 at once), like Backend Developer + WebSocket Engineer + Database Optimizer running simultaneously, not one generalist bottleneck (Credits: https://lnkd.in/dgfrstVq).
• Ralph method for overnight autonomous development (Credits: Geoffrey Huntley, repomirror project https://lnkd.in/dXzAqDGc).

This helped drive my coding agent output from inconsistent to 80% of the way there, enabling me to build at a scale like never before. Setup for this workflow takes 5 minutes: a single prompt installs everything across any AI coding tool (Cursor, Windsurf, GitHub Copilot, Claude Code).

I'm open-sourcing the complete infrastructure and my workflow instructions today. We need better developer experiences than being told to "use AI tools" or manually assembling all of these pieces without the support layer to make them actually work. PRs are welcome, whether you're building custom skills, creating domain-specific sub-agents, or finding better patterns.

Link to repo: https://lnkd.in/dfm4NAmh
Full breakdown of the workflow: https://lnkd.in/dr9c-UX3

What patterns have you found make the biggest difference in your coding agent productivity?
-
One AI coding hack that helped me 15x my development output: using design docs with the LLM.

Whenever I'm starting a more involved task, I have the LLM first fill in the content of a design doc template. This happens before a single line of code is written. The motivation is to have the LLM show me it understands the task, create a blueprint for what it needs to do, and work through that plan systematically.
––
As the LLM is filling in the template, we go back and forth clarifying its assumptions and implementation details. The LLM is the enthusiastic intern; I'm the manager with the context. Again, no code written yet.
––
Then, when the doc is filled in to my satisfaction with an enumerated list of every subtask, I ask the LLM to complete one task at a time. I tell it to pause after each subtask is completed for review. It fixes things I don't like. Then, when it's done, it moves on to the next subtask. Repeat until done.
––
Is it vibe coding? Nope. Does it take a lot more time at the beginning? Yes. But the outcome: I've successfully built complex machine learning pipelines that run in production in 4 hours. Building a similar system took 60 hours in 2021 (a 15x speedup). Hallucinations have gone down. I feel more in control of the development process while still benefiting from the LLM's raw speed. None of this would have been possible with a sexy 1-prompt-everything-magically-appears workflow.
––
How do you get started using LLMs like this? @skylar_b_payne has a really thorough design template: https://lnkd.in/ewK_haJN
––
You can also use shorter ones. The trick is just to guide the LLM toward understanding the task, enumerating each of the subtasks, and then completing each subtask methodically.
––
Using this approach is how I really unlocked the power of coding LLMs.
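The subtask-at-a-time loop described above can be sketched as plain code. This is a hedged illustration, not the author's actual tooling: `ask_llm` and `review` are hypothetical stand-ins for the chat round-trip and the human review pause.

```python
from typing import Callable


def run_design_doc_loop(
    subtasks: list[str],
    ask_llm: Callable[[str], str],   # hypothetical: one chat round-trip
    review: Callable[[str], bool],   # hypothetical: human approves output?
    max_fix_rounds: int = 3,
) -> list[str]:
    """Complete each subtask from the design doc one at a time,
    pausing for review before moving on to the next."""
    completed = []
    for task in subtasks:
        output = ask_llm(f"Complete only this subtask, then stop: {task}")
        rounds = 0
        # Pause after each subtask; request fixes until the reviewer approves.
        while not review(output) and rounds < max_fix_rounds:
            output = ask_llm(f"Revise your work on: {task}")
            rounds += 1
        completed.append(output)
    return completed
```

The structure is the point: the enumerated subtask list from the design doc drives the loop, and nothing advances without an explicit review step.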
-
Most developers treat AI coding agents like magical refactoring engines, but few have a system, and that's wrong. Without structure, coding with tools like Cursor, Windsurf, and Claude Code often leads to files rearranged beyond recognition, subtle bugs, and endless debugging.

In my new post, I share the frameworks and tactics I developed to move from chaotic vibe-coding sessions to consistently building better, faster, and more securely with AI.

Three key shifts I cover:
-> Planning like a PM: starting every project with a PRD and a modular project-docs folder radically improves AI output quality
-> Choosing the right models: using reasoning-heavy models like Claude 3.7 Sonnet or o3 for planning, and faster models like Gemini 2.5 Pro for focused implementation
-> Breaking work into atomic components: isolating tasks improves quality, speeds up debugging, and minimizes context drift

Plus, I share under-the-radar tactics like:
(1) Using .cursor/rules to programmatically guide your agent's behavior
(2) Quickly spinning up an MCP server for any Mintlify-powered API
(3) Building a security-first mindset into your AI-assisted workflows

This is the first post in my new AI Coding Series. Future posts will dive deeper into building secure apps with AI IDEs like Cursor and Windsurf, advanced rules engineering, and real-world examples from my projects.

Post + NotebookLM-powered podcast: https://lnkd.in/gTydCV9b
-
After spending 1000+ hours coding with AI in Cursor, here's what I learned:

1️⃣ Treat AI like your forgetful genius friend: brilliant, but always needing reminders of your goals.
2️⃣ Context rules everything. Regularly reset, condense, and document your sessions. Your efficiency skyrockets when context is clear.
3️⃣ Start by sharing your vision. AI can read code but not minds; clarity upfront saves countless revisions.
4️⃣ Premium models pay off. Gemini 2.5 Pro (1M tokens) or Claude 4 Sonnet are worth every penny when tackling tough problems.
5️⃣ Brief AI as you would onboard a junior dev: clearly explain architecture, constraints, and goals upfront.
6️⃣ Leverage rules files as your hidden superpower. Preset your coding patterns and workflows to start smart every time.
7️⃣ Collaborate with AI first. Discuss and validate ideas before writing any code; it dramatically reduces wasted effort.
8️⃣ Keep everything documented. Markdown-based project logs make complex tasks manageable and ensure seamless handovers.
9️⃣ Watch your context window closely. Past the halfway mark, productivity dips; stay sharp with quick resets and concise summaries.
🔟 Version-control your rules. Team-wide knowledge sharing ensures consistent quality and rapid onboarding.

If these insights help you level up, ♻️ reshare to boost someone else's AI coding skills today!
-
I started coding again when the first ChatGPT launched in November 2022—curiosity turned into obsession. Since then, I've tried nearly every AI coding tool out there. Recently, I've become hooked on Cursor.

It's common to see two extremes:
• New/junior devs often overestimate what AI can do.
• Senior engineers usually distrust it entirely.
Both are wrong! The sweet spot is using AI as an empowering partner, not a full dev replacement. You're still in control—AI can help you go faster and think deeper, but only if you stay in the loop.

After months of heavy use, here are some practical tips and a prompt sequence I rely on for deep code reviews and debugging in Cursor 👇

🔁 1. LLMs have no memory. Every chat is stateless. If you close the tab or start a new thread, you must reintroduce the code context—especially for complex systems.
📌 2. Think in steps, not monolith prompts. Work in multi-step prompts within the same chat session. Review each output before proceeding.
⚠️ 3. LLMs tend to do more than asked. Start by asking: "What are you going to do?" Then approve and ask: "Now do only that."
💾 4. Commit before you go. Save your last working state. AI edits can be powerful—and sometimes destructive.
🧠 5. Use the right model for the job.
• Lightweight stuff → Sonnet 4
• Deep analysis or complex refactoring → Opus 4 or o3 (these cost more, but they're worth it)

👨‍💻 Prompt Workflow Example: Reviewing a Complex App with Legacy Code
Here's a sequence I use inside a single Cursor chat session:

🧩 Prompt 1: "As a senior software architect, review this app. Focus on [e.g. performance, architecture, state management, UI]. Provide an .md doc with findings, code diagrams, and flow logic."
✅ Carefully review what's generated. Correct or expand anything that feels off. Save it for reuse.

🔍 Prompt 2: "Based on this understanding, identify the top 5 most critical issues in the app—explain their impact and urgency."
Ask for clarification or expansion if needed.

💡 Prompt 3: "For issue #3, suggest 2–3 possible solutions (no code yet). For each, list pros/cons and outline what needs to change."
Choose the most viable solution.

🛠️ Prompt 4: "Now implement the selected solution step by step. After each step, run ESLint (and, if available, unit tests)."

🔬 Pro tip: Ask Cursor to generate a full unit test suite before editing. Then validate every change via tests + linting.

This is how I use AI coding tools today: as a thought partner and execution aid, not a replacement. Would love to hear your workflows too.

#CursorIDE #PromptEngineering #DeveloperTips #CodingWithAI
-
In the last 4 months, I've tried and tested 7 coding AI agents. Here are my top 5, and when to use each one.

Coding agents cannot be ignored now. These agents are not only moving markets but are the core foundation of AI-native companies in 2026. Over the last 4 months, I tested major coding agents, from open-source options like Open Code and Cline to paid options like Cursor.

📌 Let me break down all 5 coding agents so you know when to use which one:

1\ OpenAI Codex
- Cloud-based coding agent that runs tasks in isolated sandboxes via CLI
- Best for: background/async tasks, parallel agents, CI/CD pipelines
- Use when: you need to automate large-scale coding tasks without touching the IDE
Quick Start Guide: https://lnkd.in/gKUgFnPH

2\ Claude Code
- Anthropic's terminal-based agentic coding tool that works directly in your codebase
- Best for: large refactors, multi-file edits, complex debugging
- Use when: you live in the terminal and need deep, repo-level reasoning
Quick Start Guide: https://lnkd.in/gSAYPN4b

3\ GitHub Copilot
- AI pair programmer embedded across VS Code and the entire GitHub ecosystem
- Best for: inline autocomplete, quick snippets, PR reviews
- Use when: you want frictionless suggestions without changing your existing workflow
Quick Start Guide: https://lnkd.in/gdmERrn5

4\ Cursor
- AI-native code editor (fork of VS Code) with deep codebase understanding
- Best for: complex cross-platform testing, faster edits across multiple files
- Use when: you want an AI-first editor that understands your full project context
Quick Start Guide: https://lnkd.in/gf8jbtdv

5\ Antigravity
- Google's autonomous code editor, an agent-first IDE (fork of VS Code) powered by Gemini 3
- Best for: end-to-end task execution, native Google model APIs, browser-based testing
- Use when: you want to act as an architect and delegate full tasks to autonomous agents
Quick Start Guide: https://lnkd.in/gQXQbHHk

📌 Quick decision guide:
1\ Need async, sandboxed task automation → Codex
2\ Terminal-first, large-codebase refactoring → Claude Code
3\ Daily autocomplete within VS Code/GitHub → Copilot
4\ AI-native editor with deep project context → Cursor
5\ Orchestrate multiple agents end-to-end → Antigravity

If you want to understand AI agent concepts more deeply, my free newsletter breaks down everything you need to know: https://lnkd.in/g5-QgaX4

Save 💾 ➞ React 👍 ➞ Share ♻️ & follow for everything related to AI agents
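The quick decision guide above is essentially a lookup table. A toy sketch of it as code (the tool names come from the post; the dictionary keys are paraphrased and purely illustrative):

```python
# Toy lookup mirroring the post's quick decision guide.
AGENT_FOR_NEED = {
    "async sandboxed task automation": "OpenAI Codex",
    "terminal-first large-codebase refactoring": "Claude Code",
    "daily autocomplete in VS Code/GitHub": "GitHub Copilot",
    "AI-native editor with deep project context": "Cursor",
    "orchestrate multiple agents end-to-end": "Antigravity",
}


def choose_agent(need: str) -> str:
    """Return the suggested coding agent for a given need."""
    return AGENT_FOR_NEED.get(need, "no clear match in this guide")
```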
-
AI coding LLMs and tools are improving rapidly. There is a massive amount of value and velocity teams can unlock by using them correctly. One reminder I recently shared internally at Productboard that's worth repeating more broadly 👇

It's critical to start with a strong product specification. Spend the first 1–2 hours iterating on the spec definition to ensure all requirements are clear and there are no surprises mid-implementation. A few practical tips on how to do that:
🔹 Paste (or, even better, pull via MCP) the specs you got from your PM into a Markdown file
🔹 Ask Claude: "Ask me any questions needed to make sure you deeply understand the feature we will be building." You might get 40–60 questions back; ideally use something like WhisperFlow so you don't spend the next two hours just answering them
🔹 Ask Claude: "Propose three very different approaches to building this feature and explain their pros and cons in terms of complexity, maintainability, and user value." Then iterate toward the approach that makes the most sense
🔹 Ask Claude: "Research the codebase, put together an implementation plan for this feature, and come back with additional product questions that need to be answered before implementation."

Context engineering is just as critical. A few tips there:
🔹 Use a "Research → Plan → Implement" staged flow, fully wiping the context window between each stage instead of relying on automatic compaction
🔹 Spend significant time reading, reviewing, and adjusting the outputs of each stage
🔹 Use research sub-agents heavily; you may need to explicitly prompt for this depending on the tool and LLM you're using

When it comes to implementation quality:
🔹 Make sure you truly understand every line of code you push into a PR
🔹 Having the agent walk you through the changes and explain non-obvious parts (especially around libraries or frameworks) is often a great idea

Tooling matters more than ever:
🔹 Make sure you deeply understand the features and tricks of the coding tools you use; not easy when tools like Claude Code and Cursor ship updates almost daily
🔹 Invest in AI tooling configuration in your repos
🔹 Invest in better linters; the best teams are often doubling the number of linter rules compared to pre-AI days, giving agents fast and precise feedback
🔹 Constantly update your AGENTS.md / CLAUDE.md files as you notice behaviors that should be adjusted; top teams update these almost daily

And finally:
🔹 Share your tips and tricks with colleagues

How are you and your teams approaching AI-assisted coding today? What practices have made the biggest difference for you so far?
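The "Research → Plan → Implement" staged flow with explicit context wipes can be sketched abstractly. This is a hedged illustration, not Productboard's tooling: `run_stage` is a hypothetical wrapper that starts a fresh agent session for each stage.

```python
from typing import Callable


def staged_flow(
    task: str,
    run_stage: Callable[[str, str], str],  # hypothetical: (stage, prompt) -> output
) -> str:
    """Run Research -> Plan -> Implement as three separate sessions.

    Only each stage's written output is passed forward, which acts as a
    manual context wipe instead of relying on automatic compaction.
    """
    research = run_stage("research", f"Research the codebase for: {task}")
    plan = run_stage("plan", f"Using these findings, write a plan:\n{research}")
    # Each call starts with a fresh context; only the plan text carries over.
    return run_stage("implement", f"Implement exactly this plan:\n{plan}")
```

In practice you would read and adjust the research and plan artifacts between stages, as the post recommends, rather than piping them through automatically.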
-
AI coding assistants generate code faster than I can review it. That's not a flex; it's a problem.

In the last few weeks I've been building what I'm calling a self-healing AI coding workflow. The idea is simple: instead of manually reviewing a bunch of AI slop, you give the coding agent a carefully structured framework for validating its own work, so it fixes most bugs before you see them.

I packaged it into a single Claude Code skill. It kicks off three parallel sub-agents that research your codebase, understand the database schema, and do a code review. Then it spins up the dev server, defines user journeys based on what it learned, and tests each one by navigating the actual UI with browser automation, querying the database to verify records, and taking screenshots along the way. When it finds a blocker, it fixes the code, retests, and moves on. When it finds smaller issues, it logs them for you to address later. At the end you get a structured report with everything it found, everything it fixed, and screenshots of every step.

The point isn't perfection. It's reducing the mental drag of validation so that by the time control passes back to you, the big stuff is already handled.

I just posted a full breakdown on YouTube showing the entire workflow in action, including how to plug it directly into your feature development process: https://lnkd.in/g86HuxYf
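The fix-and-retest loop at the heart of a workflow like this can be sketched in a few lines. This is an abstract illustration, not the actual skill: `run_journey` and `apply_fix` are hypothetical hooks standing in for the browser-automation test and the agent's code edit.

```python
from typing import Callable


def self_heal(
    journeys: list[str],
    run_journey: Callable[[str], bool],  # hypothetical: UI test, True = passed
    apply_fix: Callable[[str], None],    # hypothetical: agent edits the code
    max_attempts: int = 3,
) -> dict[str, str]:
    """Test each user journey; on failure, fix and retest before
    handing control back to the human. Returns a simple report."""
    report = {}
    for journey in journeys:
        attempts = 0
        # Retest after every fix; stop once the journey passes or we give up.
        while not run_journey(journey) and attempts < max_attempts:
            apply_fix(journey)
            attempts += 1
        report[journey] = "passed" if run_journey(journey) else "still failing"
    return report
```

The human only sees the final report, which is exactly the "reduce the mental drag of validation" goal the post describes.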