When Your Whole Team Ships Fast and Understands Nothing
I wrote before about the “AI hangover”—that moment when you realize you don’t fully understand code you shipped last week. It’s uncomfortable, but manageable when it’s just you.
Now multiply that by five engineers.
The Speed Trap at Scale
Here’s what happened on a team I worked with. Everyone adopted Claude Code around the same time. Productivity went through the roof. Features that used to take a week shipped in two days. The backlog started shrinking for the first time in months.
And then things got weird.
Someone would ask a question in Slack about how a feature worked. Silence. The person who built it would respond: “Let me check, I don’t remember exactly how I did that.” They’d written it three weeks ago.
This wasn’t laziness. It was the natural result of moving faster than you can internalize.
Who Owns This Code?
Ownership gets fuzzy when AI writes most of it.
Before Claude Code, if you built a feature, you knew that feature. You’d debugged it at 2am. You remembered why that weird retry logic existed. The code lived in your head.
Now? You described what you wanted, reviewed what came out, and shipped it. You understand it well enough to approve it. But “well enough to approve” isn’t the same as “well enough to debug at 2am three months later.”
When something breaks, there’s this awkward moment: “Didn’t you build this?” “I mean… sort of?”
The PR Review Problem
PRs started piling up. Not because people were slow—because there was just so much code.
When everyone can generate code faster, everyone does. The review queue grows. And here’s the uncomfortable truth: when you’re reviewing your third 500-line PR of the day, you’re not really reading it anymore. You’re skimming. You’re trusting the tests. You’re approving because it looks reasonable.
I caught myself doing this. “Tests pass, types check, LGTM.” That was my review. For code that would run in production for years.
Knowledge Silos Get Worse
You’d think AI would reduce knowledge silos. Everyone has the same tool, same capabilities, right?
Actually, the opposite happened.
People stopped reading each other’s code. Why would you? There’s so much of it now, and you’ve got your own features to ship. The implicit knowledge transfer that used to happen through PR reviews (absorbing how a teammate solved a similar problem) slowed down.
Each person became an expert in their own AI-generated corners of the codebase. The shared mental model of the system started fragmenting.
The “It Works” Trap
“It works” became the standard for shipping.
Not “I understand why it works.” Not “I could explain this to a new hire.” Just: the tests pass, the feature does what we asked, ship it.
This is fine for a while. Until it isn’t.
The first real incident was a wake-up call. Three engineers looked at the code. None of them had written it—or rather, none of them had really written it. Everyone was equally confused. Debugging took four times longer than it should have because nobody had the context in their head.
Now What?
I don’t have clean answers here. We’re still figuring it out. But here’s what we’re thinking about.
The obvious response is “slow down”—but that feels like giving up the thing that makes these tools valuable. Nobody wants to go back to writing everything by hand just to preserve understanding.
Maybe it’s about changing how we review. What if PR reviews weren’t just “does this work” but “do I understand this well enough to debug it later”? That’s a higher bar, and it might mean smaller PRs even when Claude Code could generate bigger ones.
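One concrete experiment here, and I’ll stress this is a sketch we’ve talked about rather than something we’ve deployed: a CI step that fails when a PR’s diff blows past a line budget. The branch name and the 400-line number below are placeholders; pick whatever your reviewers can actually read in one sitting.

```python
#!/usr/bin/env python3
"""CI gate sketch: fail when a PR's diff exceeds a line budget.

Placeholder values throughout -- BASE and BUDGET are assumptions,
not numbers our team has settled on.
"""
import subprocess
import sys

BASE = "origin/main"   # placeholder: your default branch
BUDGET = 400           # placeholder: what one reviewer can genuinely read

def changed_lines(base: str) -> int:
    # `git diff --numstat` prints "added<TAB>deleted<TAB>path" per file;
    # binary files report "-" for both counts, so we skip them.
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added.isdigit() and deleted.isdigit():
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines(BASE)
    if n > BUDGET:
        print(f"{n} changed lines is over the {BUDGET}-line budget. Split the PR.")
        sys.exit(1)
    print(f"{n} changed lines. Reviewable.")
```

A blunt instrument, but it moves the incentive: Claude Code can generate 2,000 lines in one pass, so the limit has to live somewhere other than reviewer willpower.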
Maybe it’s about rethinking ownership. If nobody really wrote the code, maybe ownership needs to be more explicit. Assigned, not assumed. Someone’s name attached, with a commitment to understand the code deeply, even if AI did the typing.
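On GitHub, the lowest-tech version of this is a CODEOWNERS file: paths mapped to people, checked into the repo, with reviews requested automatically when a PR touches a match. The paths and handles here are invented for illustration; the point is that the name is written down, not remembered.

```
# .github/CODEOWNERS
# Hypothetical paths and handles -- the point is the name is explicit.
/billing/         @alice
/search/          @bob
/infra/deploy/    @carol @alice
```

It doesn’t guarantee understanding, but it ends the “I mean… sort of?” conversation: someone’s name is on the hook before the incident, not after.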
Maybe it’s about forcing knowledge transfer differently. Pulling someone who didn’t write the code into debugging it. Painful, but it spreads context around instead of letting it concentrate.
Or maybe it’s accepting that some level of understanding loss is just the cost of moving faster—and being intentional about where we’re willing to pay that cost and where we’re not.
We’re experimenting. Nothing’s solved yet. But ignoring it definitely isn’t working.