The "I Built This With AI" Hangover
I had a side project break last month. Nothing major—an API endpoint started returning empty responses. Should’ve been a 10-minute fix.
It took me three hours.
The problem wasn’t the bug. The problem was that I’d built this feature with heavy AI assistance back in October, shipped it because it worked, and then never thought about it again. When I opened the file, I genuinely didn’t recognize half the code. There was a decorator I’d never seen before. Some caching logic I didn’t remember asking for. A helper function that seemed to do something important but I couldn’t tell you what.
It worked when I shipped it. That was the whole problem.
The Speed/Understanding Tradeoff
Here’s what happens when you build with AI assistance: you skip the part where you actually learn what you’re building.
Normally, when you write code, you’re making decisions constantly. Should I use a class or a function? What should I name this? How should I handle this edge case? Each decision forces you to understand the problem a little better.
With AI, you describe what you want and code appears. If it works, you move on. The decisions still get made—you just weren’t there for them.
This is fine for throwaway scripts. It’s a problem for anything you’ll need to maintain.
Symptoms You Might Recognize
You’re in this situation if:
- You open a file and your first thought is “who wrote this?”
- There are import statements for libraries you’ve never heard of
- The code uses patterns you don’t normally use
- You can’t explain why it works, just that it does
- Debugging feels like reverse-engineering someone else’s code
That last one is the killer. Debugging requires a mental model of how the code should work. If you never built that model, you’re starting from scratch every time something breaks.
Things I Do Differently Now
I still use AI for everything. I’m not going back to writing boilerplate by hand. But I’ve added some friction to the process.
I read before I commit. Not just “does it work?” but “do I understand this?” If I can’t explain what a function does, I either ask the AI to explain it or I rewrite it in a way that makes sense to me.
I question the dependencies. AI loves pulling in libraries. Half the time there’s a good reason, half the time it’s overkill. When I see an import I don’t recognize, I ask: do I need this? Could I do this with stdlib? If I keep it, I at least skim the docs so I know what I’m shipping.
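A hypothetical example of what that swap can look like (not the code from the original incident): an AI-suggested third-party caching package can often be replaced with `functools.lru_cache` from the standard library.

```python
from functools import lru_cache

# Instead of pulling in a caching library, the stdlib covers the simple case:
# memoize a pure-ish function keyed on its (hashable) arguments.
@lru_cache(maxsize=128)
def fetch_config(env: str) -> dict:
    # Stand-in for an expensive lookup (file read, network call, ...).
    return {"env": env, "debug": env != "prod"}

fetch_config("prod")   # computed on the first call
fetch_config("prod")   # served from the cache on the second
print(fetch_config.cache_info().hits)  # → 1
```

One decorator I actually understand beats a dependency I don't, and `cache_info()` gives me a quick way to confirm it's doing what I think.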
I refactor to my style. AI writes code in a generic, tutorial-esque way. Lots of comments, verbose variable names, patterns from Stack Overflow’s greatest hits. I take a few minutes to make it look like code I would’ve written. Future me will thank present me.
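A made-up before/after to show the kind of pass I mean (both versions do the same thing; the names are illustrative, not from any real project):

```python
# AI-flavored version: defensive, verbose, tutorial comments everywhere.
def get_active_user_emails(user_list):
    """Gets the emails of active users from a list of users."""
    active_user_emails = []
    for user in user_list:
        if user.get("is_active") is True:
            email = user.get("email")
            if email is not None:
                active_user_emails.append(email)
    return active_user_emails

# After a quick pass to make it read like code I'd have written myself:
def active_emails(users):
    return [u["email"] for u in users if u.get("is_active") and u.get("email")]

users = [
    {"email": "a@x.com", "is_active": True},
    {"is_active": True},                      # no email → skipped
    {"email": "b@x.com", "is_active": False}, # inactive → skipped
]
print(active_emails(users))  # → ['a@x.com']
```

The behavior doesn't change; what changes is that six months from now the shape of the code matches the shape of my thinking.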
The 2am test. Before shipping, I ask myself: if this breaks at 2am, can I debug it? If the answer is “probably not,” I spend more time understanding it now while the context is fresh.
Already In The Mess?
If you’re staring at AI-generated code you don’t understand and something’s broken, here’s what helps:
First, resist the urge to ask AI to fix it. That’s how you got here. You’ll end up with more code you don’t understand patched on top of code you already don’t understand.
Instead, start from the inputs and outputs. What goes into this function? What comes out? Add some print statements or breakpoints. Trace the actual data flow instead of trying to read the code and guess.
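A low-tech way to do that tracing (a sketch, not anything from the original code) is a small decorator that prints each call's arguments and return value, so you see the real data instead of guessing from the source:

```python
import functools

def trace(func):
    """Print the arguments and return value of each call — crude but effective."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"-> {func.__name__}(args={args!r}, kwargs={kwargs!r})")
        result = func(*args, **kwargs)
        print(f"<- {func.__name__} returned {result!r}")
        return result
    return wrapper

# Slap it on the mystery function and run the failing request:
@trace
def normalize(items):
    return [i.strip().lower() for i in items if i]

normalize(["  Foo", "", "BAR "])  # prints the call, then → ['foo', 'bar']
```

Ten minutes of watching actual inputs flow through usually builds more of a mental model than an hour of reading.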
If there’s a mystery library, go read its docs. Not a tutorial—the actual API documentation. You’ll probably discover it does three things and you’re using one of them.
And honestly, sometimes the right call is to rewrite it. If the code is short enough and you can’t figure out what it does, just delete it and rebuild with more oversight. You’ll write it faster this time, and you’ll actually understand it.
The Real Lesson
AI didn’t cause this problem. I did. I got lazy about the “understanding” part of building software because the “working” part came so fast.
The tools are incredible. I build things in hours that would’ve taken days. But building fast doesn’t mean I get to skip the part where I own what I build. The code lives in my repo. When it breaks, it’s my problem.
Now I treat AI-assisted code the same way I’d treat code from a contractor: review it properly before accepting it, make sure I understand the important parts, and don’t be surprised when it needs maintenance.
The hangover is real. But it’s preventable.