
Using Claude Code for Database Work

claude-code · postgresql · database · lessons-learned

I’ve been using Claude Code heavily for API work—endpoints, business logic, data transformations. That part clicked pretty quickly. But database work felt riskier to hand off. Migrations are destructive. Bad SQL on a hot table can take down your whole app. I held back for a while.

Eventually I started leaning into it, and honestly it’s been one of the bigger productivity wins. But it took some adjustment to figure out where Claude actually helps and where it’ll burn you.

Where It Actually Shines

Writing migration scripts is the obvious one. Not the logic of what to migrate—that’s still on me—but the boilerplate. Creating a table with the right constraints, adding a foreign key with the correct ON DELETE behavior, writing the rollback. That stuff takes longer than it should when you’re doing it by hand. Claude gets it right almost every time.
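To make that concrete, here's the kind of boilerplate I mean — a sketch with hypothetical table and column names, showing the foreign key with explicit ON DELETE behavior and the matching rollback:

```sql
-- Hypothetical example: a child table with constraints, a foreign key
-- with explicit ON DELETE behavior, and the rollback alongside it.
CREATE TABLE order_items (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    order_id   bigint NOT NULL REFERENCES orders (id) ON DELETE CASCADE,
    sku        text   NOT NULL,
    quantity   integer NOT NULL CHECK (quantity > 0),
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Rollback:
DROP TABLE order_items;
```

None of this is hard, it's just easy to get subtly wrong by hand — the CASCADE versus RESTRICT choice, the CHECK constraint, the rollback you'd otherwise skip writing.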

Schema design sessions have been surprisingly useful too. I’ll describe what I’m building, drop in some context about the existing data model, and have a back-and-forth about the tradeoffs. Not “design this for me” but more “I’m thinking about this shape, what am I missing?” It’s caught things—missing indexes, ambiguous nullability, cases where I was about to model a one-to-many as a many-to-many for no good reason.

Complex queries are another one. Anything with CTEs, window functions, or multiple levels of aggregation—I used to spend 20 minutes getting the structure right before I could even think about the actual logic. Now I describe what I need and clean up what comes back. The time I save there is real.
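The shape I mean looks something like this — a sketch against a hypothetical orders table, combining a CTE with window functions to get each customer's latest order and its share of total revenue:

```sql
-- Hypothetical query shape: latest order per customer, plus that
-- order's share of all-time revenue, via a CTE and window functions.
WITH ranked AS (
    SELECT
        customer_id,
        total,
        created_at,
        ROW_NUMBER() OVER (PARTITION BY customer_id
                           ORDER BY created_at DESC) AS rn,
        SUM(total) OVER () AS grand_total
    FROM orders
)
SELECT
    customer_id,
    total,
    created_at,
    total / grand_total AS revenue_share
FROM ranked
WHERE rn = 1;
```

Getting that scaffolding right used to be the slow part; now the slow part is just verifying the logic, which is where I want to spend the time anyway.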

The Part That’ll Get You

Here’s the thing Claude doesn’t know: your data.

It doesn’t know that one of your tables has 80 million rows. It doesn’t know that your users table has a wildly non-uniform distribution because of how you onboarded early customers. It doesn’t know which indexes exist, which ones the planner actually picks, or which ones are dead weight.

I learned this when I asked for help optimizing a slow query. Claude rewrote it—cleaner, more elegant, probably correct on a fresh database. I ran EXPLAIN ANALYZE on it and it was doing a full sequential scan on a table I knew had a relevant index. The rewrite changed the query shape just enough that Postgres stopped using the index.

Now I paste EXPLAIN ANALYZE output in with every query optimization request. It’s not optional. Without it, Claude is guessing at the physical reality of your database, and it’ll guess confidently.
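What I paste is the actual plan, not a description of it. The query below is a hypothetical stand-in; the point is the EXPLAIN invocation — ANALYZE runs the query for real timings, and BUFFERS adds the I/O picture:

```sql
-- Get the real plan with actual row counts, timings, and buffer hits.
-- (ANALYZE executes the query, so wrap writes in a transaction you roll back.)
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE customer_id = 42;
```

If the output says "Seq Scan" on a table you believe is indexed for that predicate, that's exactly the mismatch between theory and physical reality that Claude can't see without the plan.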

What I Always Include Now

When I’m asking about performance or query work, I give Claude:

  • The relevant table sizes (rough row counts)
  • The indexes that exist on those tables
  • The EXPLAIN ANALYZE output from the current slow query
  • Any constraints on locking (tables that can’t afford a full lock during a migration, etc.)
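Gathering the first two items takes seconds if you lean on the catalog instead of counting rows. A sketch, with hypothetical table names — reltuples is the planner's estimate, which is plenty accurate for this purpose and avoids a full scan:

```sql
-- Approximate row counts from planner statistics (fast; no table scan):
SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname IN ('orders', 'order_items');

-- Index definitions on the tables in question:
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('orders', 'order_items');
```

I paste the output of both straight into the conversation, unedited.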

That context changes the quality of the output dramatically. Without it, you get theoretically correct SQL. With it, you get SQL that accounts for the fact that your system is actually running in production.

For migrations specifically, I always ask about the rollback before I ask about the migration itself. Partly because it forces Claude to think about reversibility, and partly because I’ve been in enough incidents to know that the rollback is the part you need most when you’re stressed.

The Backfill Problem

One thing I keep running into: Claude will write a backfill migration that works correctly but would lock a table for ten minutes on a real dataset. It’s not wrong—it’ll do what you asked—it just doesn’t know you have a million rows in that table and can’t afford a lock.

The fix is straightforward: tell it the table size and ask it to batch the backfill. But you have to know to ask. If you don’t mention it, you’ll get a migration that’s technically fine and operationally a problem.
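The batched version looks something like this — column names are hypothetical, but the pattern is the general one: update a bounded slice, commit, repeat until nothing's left, so each statement holds row locks briefly instead of one statement holding the table hostage:

```sql
-- Batched backfill sketch: run this statement repeatedly (each run is
-- its own short transaction) until it reports 0 rows updated.
WITH batch AS (
    SELECT id
    FROM orders
    WHERE normalized_status IS NULL
    ORDER BY id
    LIMIT 10000
)
UPDATE orders o
SET normalized_status = lower(o.status)
FROM batch
WHERE o.id = batch.id;
```

The batch size is a knob, not a constant — 10,000 is a starting point you tune against how long each batch actually takes on your hardware.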

Things I Still Do Myself

I don’t let Claude make decisions about when to add an index. It’ll suggest one if I ask, and I’ll think about it, but the final call stays with me. Indexes aren’t free—they slow down writes and take up space—and that tradeoff depends on read/write patterns Claude can’t see.

I also don’t let it touch anything related to partitioning or replication. That stuff is consequential enough that I want to reason through it manually, or at least do a lot more validation before I trust the output.

And any migration that drops data—even temporarily—I review at least twice. Claude’s pretty careful about this in my experience, but it’s the kind of thing where being careful twice is worth it.

The Workflow That Works For Me

Describe what I need in plain terms. Ask for the migration first, then ask for the rollback separately. Run it against a copy of production data before it goes anywhere near production. If it’s a slow query, paste in the EXPLAIN output and the schema and ask again.

That’s basically it. It’s not complicated, but the instinct to just take the first output and run with it is real, and it’s the instinct that’ll cost you.

The database work I was most hesitant to hand off is now some of the work I’m most comfortable getting help with—as long as I bring enough context to the conversation. Claude doesn’t know your data. That’s your job.