The faster your engineers ship with AI, the more code review becomes the bottleneck. freee saw it coming before it became a crisis:
- 54% acceptance rate on critical issues
- 32.8 weeks of reviewer time saved
- Reduced manual code-review burden
→ https://lnkd.in/gdMst2Sq
CodeRabbit’s Post
-
"Writing code is intelligence; knowing what to build is judgment" is an interesting take. In my daily work, writing code is now 100% AI. What I actually do all day is judgment, deciding what to build next, what code quality looks like, and whether to ship now or refine. The ratio has completely flipped. https://lnkd.in/gFDfbXVm
-
After 3 months of building with AI, I had working code and a broken process. The problem wasn't the AI. It was that knowledge evaporated between sessions, and architectural decisions didn't persist. I built FORGE to fix this: a methodology framework that scales process with complexity and treats governance as context, not friction. The result: ~90 min of overhead per feature, with rework dropping from 30–50% to under 15%. Here's the full story, including what I got wrong first: https://lnkd.in/dRvHenxs
-
AI is helping us write more code than ever. More PRs, more commits, more features shipping faster. But code review has not kept up. It is still one of the biggest bottlenecks, and when people are juggling a lot of context, obvious things get missed.

I recently enabled Claude Code reviews in GitHub Actions, so every PR gets an automated review before a human picks it up. That is already useful: the developer gets feedback much faster, without waiting for someone else to come online or make time for a review.

I honestly did not expect much beyond generic comments. What surprised me was that, with the right prompt, it was actually useful at catching logic issues and inconsistent patterns early.

It is not replacing human review. It just means the first human pass starts from a better place: less noise, less back-and-forth, and faster review cycles. Still early, but so far this has been one of the more practical AI additions to my workflow.
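For context, a setup like the one described can be sketched as a GitHub Actions workflow. This is a minimal illustration, not the author's actual configuration: the action name, version tag, and input names (`anthropic_api_key`, `prompt`) are assumptions here, so verify them against the claude-code-action documentation before adopting anything.

```yaml
# Hypothetical sketch: run an automated Claude review on every pull request.
# Action coordinates and inputs are assumptions -- check the official docs.
name: Claude PR Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write   # allows the action to post review comments
  issues: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the review sees diff context

      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this pull request. Focus on logic errors,
            inconsistencies with existing patterns in the codebase,
            and missing edge cases. Skip style nits that linters
            already cover.
```

The "right prompt" the post mentions is the leverage point: scoping the review to logic and pattern consistency, and explicitly excluding linter-territory nits, is what keeps the output from degenerating into generic comments.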
-
Two reasons to read this piece about AI from Sequoia: 1) understand the evolving difference between intelligence and judgment, 2) check where your profession and business belong in an interesting 2×2 matrix. https://lnkd.in/e3zgcWBm
-
I was pleased to share another article with Forbes Technology Council, discussing the need for a radically different approach to testing in the AI coding era. Beware black boxes, everyone. https://lnkd.in/eUPzcJMD
-
I’ve stopped writing code by hand. I hesitated to post this because it already feels obsolete — my workflow is evolving as I write this. Still, I break down how I currently work with AI agents end-to-end. More in the article. https://lnkd.in/d6EdqfXc
-
I have seen a lot of buzz lately about defects introduced into working code by AI. Some benchmarks show AI creating 1.7x more issues than humans. That is AI code generation at its worst, and I expect the gap to close. But even if AI becomes much better than a human, any change to a codebase can have side effects.

When looking at the cost of a code change, the highest cost was often the human effort to develop it. We would weigh that cost and decide whether something was worth doing based on effort. That method breaks down with AI code generation, which keeps driving the cost to develop a change toward zero. So what should we use to gauge the cost of a change as effort approaches zero?

This is where we should look at side effects. Any change, no matter how small, can introduce a defect. A one-line fix can break an unrelated feature. A dependency update can cause subtle regressions. A well-intentioned refactor can introduce a security vulnerability or a huge performance problem. This has always been part of the cost of a change in software development, but it is becoming the most important part.

When evaluating whether something is worth doing, keep in mind that effort approaching zero does not mean cost is near zero. Demonstrating that an AI can solve a problem in seconds does not mean the change is free. The true cost lies in the review, the testing, and the issues you will deal with after the change goes out.
-
I couldn't agree more with Chris Dail. As the cost of writing code approaches zero, the only cost that remains is the cost of a change going wrong. So your ability to quickly determine the blast radius of any change, and understand the behavioral shift it introduces, is what actually matters now. And I'm not talking about static analysis or type checking. I mean running the actual code against real services and getting real results. Not guesses. Not approximations. Actual behavior. That's the new leverage point.
-
Great article out of Sequoia "Every founder building an AI tool is asking the same question: what happens when the next version of Claude makes my product a feature? They’re right to worry. If you sell the tool, you’re in a race against the model. But if you sell the work, every improvement in the model makes your service faster, cheaper, and harder to compete with." https://lnkd.in/er5RnYTt