The junior engineer sat across from me in a conference room that smelled like stale coffee and stress. He had just shipped a feature that was working in production. Tests passing. Metrics green. The kind of delivery that should have felt like a win. Instead, he looked exhausted and a little bit lost.
"I can't explain how it works," he said.
Not because he hadn't tried. Not because he was incapable. He had stared at the function for twenty minutes before our meeting, tracing the logic, trying to reconstruct the reasoning that had produced it. The code compiled. The tests passed. But when he tried to walk through it line by line, the understanding wasn't there. The logic felt borrowed. Alien. Like reading someone else's handwriting and realizing you don't recognize your own name.
The code was his. He had written it. But he hadn't written it.
The Friction Is Gone
Here's what he told me happened. He hit a problem he didn't fully understand, so he described it to an AI assistant. The AI generated plausible-looking code. It compiled. Tests passed. He shipped it. The whole cycle took a fraction of the time it would have taken him to wrestle with the problem himself, to get stuck, to unstick himself, to actually understand what he was building.
The friction that used to force understanding was gone.
I heard someone say it plain in a conversation that stuck with me: weak devs plus AI equals weak output ... faster. The junior engineers feeling this most acutely aren't struggling because AI is too hard. They're struggling because AI made it too easy to skip the work that builds judgment.
What Gets Lost When Speed Wins
This isn't a story about one engineer having a bad week. Once I knew to look for it, I started noticing the pattern everywhere. PRs getting bigger. Review comments getting thinner. The energy behind the work shifting, and not in a direction that leads to better engineering.
I watched a thread of over 500 experienced engineers describe what happened when AI tools rolled out without thoughtful guardrails. The stories were remarkably consistent. PRs ballooned as AI generated more code than humans could reasonably review. Codebases filled with inconsistent patterns, each engineer prompting their way to a slightly different solution for the same problem. Service boundaries blurred. Error handling became a patchwork. The codebase started feeling less like an intentional system and more like a junk drawer with a CI/CD pipeline.
One engineer described it with a line I haven't been able to shake. The first sign wasn't that the team missed a date. It wasn't that quality cratered overnight. It started earlier and quieter than that. Engagement dropped first. People started pulling back. Then the quality of the thinking started to shift. Problem-solving got thinner. Engineers who used to wrestle hard with tradeoffs started giving quicker, flatter responses.
By the time performance looked off in the metrics, the team had already been drifting for a while. The breakdown started in trust and attention long before it showed up in delivery.
I Didn't See It At First
I have to admit something I didn't see clearly when we first gave the team access to these tools. I thought the risk was that engineers wouldn't use AI enough. That they would resist change and stick to old patterns out of fear or habit. I spent energy on encouragement and permission when I should have been paying attention to what happened when the tools became invisible.
The junior engineer in my conference room wasn't resisting AI. He was using it constantly. That was the problem. He had become fluent in describing problems to a machine but less fluent in solving them himself. The judgment that comes from wrestling with hard problems was atrophying because he wasn't being forced to wrestle anymore.
He had become fluent in describing problems to a machine but less fluent in solving them himself.
I saw it in myself too if I'm honest. There were moments where I prompted my way through something I should have understood more deeply, where I accepted working output without accepting the understanding that should have come with it. The convenience was seductive. The cost was invisible until it wasn't.
The Exponential Nature of the Problem
A leader gave me the cleanest explanation I've heard of what's actually happening. AI isn't a multiplier. It's an exponent.
The multiplier framing makes people think the tool adds a fixed amount of value to every team. An exponent is different. It magnifies whatever is already there. If a team has clear standards, strong review habits, shared judgment, and disciplined engineering patterns, AI makes those things more powerful. If a team is loose, inconsistent, and already carrying weak habits, AI doesn't smooth that out. It amplifies the instability.
That's why two teams can buy the same tools under the same pressure and end up in completely different places. The tool didn't create the difference. The foundation did.
I watched this play out in real time. Teams that had strong standards before AI arrived were producing more code at the same quality bar. Teams that were already inconsistent became inconsistent faster. The junk drawer codebases became junk drawer codebases with more volume. The engineers who were already struggling didn't get rescued by the tool. They got buried by it.
AI isn't a multiplier. It's an exponent.
What the Dashboard Can't See
Your AI adoption dashboard is probably showing you participation metrics right now. How many engineers have access. How many prompts are being sent. Usage rates and feature adoption and all the numbers that make leadership feel like the rollout is succeeding.
Here's what those numbers can't tell you: whether your engineers understand the code they're shipping. Whether the judgment that used to be built through friction is still being built at all. Whether your team's accumulated wisdom is growing or eroding while everyone moves faster.
The junior engineer who couldn't explain his own function wasn't an outlier. He was a canary. The kind of signal that shows up in behavior and conversation long before it shows up in velocity charts or defect rates.
The Question That Matters
Ask your engineers what they wish had been different about how the tools were introduced. Don't defend the rollout. Just listen.
What they tell you is what your dashboard can't see. The moments where they accepted generated code without understanding it. The times they shipped something that worked but couldn't explain why. The creeping sense that they were getting faster at the wrong things.
Some of them will tell you they noticed the drift in themselves and pulled back, forced themselves to slow down and understand. Others will admit they haven't found the discipline yet. They're still riding the speed wave, hoping the understanding will come later, knowing somewhere underneath that it probably won't.
What you hear will tell you whether your AI adoption is actually succeeding or just moving faster toward a future where fewer people on your team actually know how the system works.
The code compiles. The tests pass. The dashboard shows green.
But somewhere in your organization right now, a junior engineer is realizing he can't explain his own code. He's exhausted and a little bit lost. And he's waiting to see if anyone notices.
---
One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. [Subscribe for free](https://www.jonoherrington.com/newsletter).
Top comments (22)
AI is removing the productive friction that used to build engineering judgment.
We’re getting faster at producing code, but slower at building understanding.
What worries me more is the downstream effect — review quality drops, architectural consistency weakens, and senior engineers absorb the cognitive load. Over time, teams may look productive on dashboards while actual engineering depth quietly erodes.
AI truly behaves like an exponent: strong engineering culture gets stronger, weak foundations deteriorate faster. The real challenge isn’t whether we use AI — it’s whether we deliberately preserve the learning friction that builds reasoning, debugging skills, and ownership.
You’re pointing at the part leaders will miss first.
The dashboard still looks healthy while the system underneath starts drifting. Review quality doesn’t fall off a cliff… it thins out slowly. Fewer questions. Less pushback. More acceptance of “good enough.”
And you’re right on the exponent.
AI doesn’t create the culture. It amplifies whatever was already there.
The uncomfortable part politically… most teams didn’t invest in strong foundations before they scaled generation. So the acceleration is exposing that gap faster than leadership can react.
🧠 ClawMind Take
AI doesn’t just write code — it erases decision context.
⸻
Core Problem
• Code works ✅
• But why it was built this way is lost ❌
This leads to:
• inconsistent logic
• broken assumptions
• untraceable decisions
⸻
What Actually Matters
Preserve decision points at the moment they happen.
Not after. Not in hindsight.
⸻
ClawMind Approach
• Capture:
  • decision
  • reasoning
  • assumptions
• Store as:
  • replayable artifacts
  • linked to code / task
⸻
Why Tools + Governance Matter
Without structure:
AI → fast output → logic drift → system inconsistency
With ClawMind:
AI → decision captured → audited → consistent system evolution
⸻
One-line
AI generates code.
ClawMind preserves the decisions behind it.
The gap you’re describing is real.
Code ships. Context doesn’t.
Where I’d push a bit… tools don’t solve this by default. They just formalize whatever discipline already exists. If the team isn’t already stopping to think through decisions, capturing them becomes another checkbox instead of signal.
The harder part is behavioral.
Getting engineers to pause long enough to have a decision worth capturing in the first place.
Without that, you just get better storage of shallow thinking.
Then maybe the real issue isn’t storage at all.
It’s whether anything worth storing ever happened.
Are we just extracting answers from AI,
or actually learning from them?
The canary metaphor is the right one. The junior engineer who can't explain his own code isn't failing, he's signaling. The dashboard won't catch it because the code works. The tests pass. The only evidence is a feeling he's not sure he's allowed to name. Most organizations will miss this until the feeling becomes a fire. The ones that notice early will be the ones who asked the question before the metrics turned red. Not because they're smarter. Because they stopped assuming that working code and understood code are the same thing.
That’s exactly the signal. By the time the metrics catch it, the habit is already baked in. Leaders who notice early are usually the ones still close enough to the work to hear the hesitation before they see the failure.
been catching myself doing this too. accepting AI output without actually understanding why it works.
but i think the skill is shifting, not disappearing. the devs who win aren't the ones who memorize syntax. it's the ones who know when something smells wrong. that's harder than writing it from scratch, honestly.
every time we've made a tool cheaper, the number of people using it went up, not down. spreadsheets didn't kill accountants. cloud didn't kill ops. i think AI coding is the same pattern. the danger isn't the tool. it's if we stop bothering to understand what it's doing.
I agree with that. The danger isn’t AI existing in the workflow. It’s losing the ability to tell when the output is wrong, fragile, or just poorly reasoned. That judgment is what keeps the tool useful instead of corrosive.
I’ve seen this too. Code works, but when you ask “why,” the answer isn’t there. That’s a different kind of risk.
Exactly. A missing “why” is a real risk because understanding is what lets a team debug, extend, and trust what got shipped.
This resonates with something we see in the market data every week. The dependency problem is not just cognitive; it is economic. As AI makes coding easier, the inference bill for staying sharp compounds quietly in the background, and most developers have no visibility into what that actually costs at scale. The teams that maintain genuine coding ability alongside AI assistance are not just protecting their craft; they are protecting their ability to audit what the AI is doing and catch the moments when it confidently generates something plausible but wrong. That judgment layer does not get cheaper when the tools get better, and it probably gets more valuable. The forgetting risk is real, but so is the cost of having no human in the loop who actually understands the code being shipped.
Strong point. The more teams rely on generated output, the more valuable real human audit ability becomes. That layer does not get cheaper just because generation does.
The exponent framing is the most useful mental model I've seen for this. I run an automation consultancy and the pattern shows up even outside traditional engineering teams — business users building workflows with AI assistance hit the same wall. They create something that works, can't explain why it works, and when it breaks they have zero debugging capability.
What I've found helps: force an "explain what this does" step before shipping. Not in a code review — earlier. When the engineer finishes a task, they write a 2-3 sentence plain-language explanation of the logic. If they can't, that's the signal.
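One lightweight way to make that step non-optional is to check the write-up mechanically before merge. A minimal sketch, assuming a `## What this does` section convention in the PR description (the section name, function name, and sentence threshold here are invented for illustration, not any real tool's API):

```python
import re


def has_explanation(description: str, min_sentences: int = 2) -> bool:
    """Return True if a PR description carries a plain-language
    'What this does' section with at least `min_sentences` sentences.

    The '## What this does' header is an assumed team convention.
    """
    match = re.search(r"## What this does\n(.+?)(?:\n## |\Z)", description, re.S)
    if not match:
        return False
    body = match.group(1).strip()
    # Crude sentence split: good enough to catch an empty or one-liner section.
    sentences = [s for s in re.split(r"[.!?]+\s*", body) if s.strip()]
    return len(sentences) >= min_sentences
```

Wired into CI, a failing check becomes exactly the signal described above: if the engineer can't produce the two-sentence explanation, the gate surfaces that before review does.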
The other pattern I've noticed: AI-assisted work creates a false ceiling on difficulty. The junior dev's problem wasn't that they couldn't do hard things — it's that AI eliminated the mid-difficulty range where skills actually develop. They went from trivial tasks to AI-generated solutions for hard tasks, with no time spent in the uncomfortable middle where you're stuck for 30 minutes and then have the breakthrough that builds real understanding.
That uncomfortable middle is where expertise actually lives. Preserving it deliberately is a leadership problem, not a tooling problem.
That “explain it before you ship it” step is strong.
It forces the moment most teams are skipping right now… translating output into understanding.
What you’re seeing with the missing middle is exactly it.
AI collapses the gradient where people used to struggle just enough to learn.
And that’s the leadership tension.
If you optimize purely for output, that middle disappears completely. If you protect it, you take a short-term hit that most orgs aren’t willing to take.
That’s the political tradeoff no one wants to say out loud.
This is the question I've been avoiding.
I can ship faster, but can I still code without AI? That's it. That's the fear.
Last week, I needed to write a simple loop. I froze. Not because it was hard. Because I realized I hadn't written one from scratch in months. AI had been doing it for me.
The skill fades silently. You don't notice until you need it and it's gone.
Thanks for naming this. Feels less lonely knowing others feel it too. 🙌
That’s the part people rarely admit out loud. Skill loss does not usually feel dramatic while it’s happening. It shows up later in a quiet moment when something basic suddenly feels unfamiliar.
The friction mechanism is the part worth naming more precisely. Bad Stack Overflow answers forced skepticism accidentally — you got burned, you learned to verify. AI removes that forcing function. It's patient, confident, never annoyed when you ask the same question twice. The junior who can't explain his own function isn't lazy. He just never got burned. There was no moment where the confident answer failed publicly and made him rebuild the understanding from scratch.
The exponent framing is right, but it has a second-order effect your piece points at without fully naming: the engineers who notice the drift and pull back are self-selecting into a different capability tier. The ones who don't notice — or notice and don't stop — are compounding in the other direction. The gap between those two groups widens faster than any manager's dashboard will show.
The interview question that surfaces this: not "build a todo app" but "here's 200 lines of AI-generated code, tests pass, it's in production, something is wrong — find it." That's the new entry test. Can they interrogate confident output? Can they find the failure the model introduced while optimizing for elegance? That's the skill the friction used to build accidentally. Now someone has to build it deliberately.
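As one illustration of the "tests pass, something is wrong" mold, here's a hypothetical snippet in that spirit (the function and its defect are invented for the example, not taken from any real interview):

```python
def moving_average(values, window):
    """Return the moving average of `values` over `window`-sized slices."""
    # Plausible AI-style output: concise, elegant, and green on the
    # happy-path test (moving_average([1, 2, 3], 2)[0] == 1.5).
    # The defect: near the end of the list the slice is shorter than
    # `window`, but the sum is still divided by `window`, so the tail
    # averages are silently wrong and the result has too many elements.
    return [sum(values[i:i + window]) / window for i in range(len(values))]
```

A correct version would iterate over `range(len(values) - window + 1)`. The point of the exercise is that the bug never throws and never fails a shallow test; it only surfaces when someone interrogates the confident output.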
This is sharp.
Getting burned used to be part of the system. Now it’s optional.
So the engineers who manufacture their own friction are going to separate fast from the ones who don’t. And like you said… that split won’t show up cleanly in any metric leadership is watching.
That interview shift is the right direction too.
Not whether they can produce code.
Whether they can challenge something that already “works.”
That’s the skill that used to develop by accident. Now it has to be designed into how we hire and how we work.
AI creates not only technical debt but cognitive debt as well.
Yes. And cognitive debt is harder to see because it does not show up in the repo right away. It shows up in weaker reasoning, thinner reviews, and slower recovery when something breaks.