We Let Claude Write Our Database Migrations — Here's What It Silently Deleted
March 26, 2026 · AI · 10 min read

The support tickets started at 11:23 AM. "My invoice has no shipping address." Then another. Then twelve more in the same hour. By the time we traced it to the migration that had run at 9:47 AM, 3,200 invoices had already been generated with blank address fields and dispatched to customers. The columns were gone. Postgres does not warn you when you drop a column that something else is reading. It just executes and returns OK in 180 milliseconds.

The Workflow That Felt Safe

We had been using Claude inside Cursor to write Postgres migration files for about three months. For additive changes — adding columns, creating indexes, new tables — it was excellent. It knew our naming conventions, wrote clean ALTER TABLE statements, and always included rollback steps. We had reviewed maybe forty migrations this way without incident. We got comfortable.

On a Thursday morning, a senior engineer opened Cursor and typed: "Clean up the users table. The address columns are deprecated — remove address_line1, address_line2, city, state, zip and replace with a single address_json JSONB column."

Claude wrote the migration in seconds. It looked clean. It looked correct. It was exactly what was asked.

sql — the migration Claude wrote
-- Migration: consolidate address fields to JSONB
-- Generated: 2026-03-19 09:31 UTC

BEGIN;

-- Add new consolidated column
ALTER TABLE users ADD COLUMN address_json JSONB;

-- Migrate existing data
UPDATE users SET address_json = jsonb_build_object(
  'line1', address_line1,
  'line2', address_line2,
  'city', city,
  'state', state,
  'zip', zip
) WHERE address_line1 IS NOT NULL;

-- Drop deprecated columns
ALTER TABLE users DROP COLUMN address_line1;
ALTER TABLE users DROP COLUMN address_line2;
ALTER TABLE users DROP COLUMN city;
ALTER TABLE users DROP COLUMN state;
ALTER TABLE users DROP COLUMN zip;

COMMIT;

-- Rollback:
-- ALTER TABLE users ADD COLUMN address_line1 TEXT; ... (etc)

The engineer reviewed it. The data migration looked right. The rollback was included. It ran in staging without errors. They deployed to production at 9:47 AM.

What Claude Did Not Know

Claude had access to one codebase: the main application. In that codebase, the five address columns were indeed deprecated. The main app had been migrated to address_json six months earlier. No active queries in the main repo touched those columns. Claude was correct about everything it could see.

What Claude could not see was the billing service.

  WHAT CLAUDE COULD SEE        WHAT CLAUDE COULD NOT SEE
  ──────────────────────────   ────────────────────────────────────

  main-app/ (repo 1) ✅         billing-service/ (repo 2) ❌
  ├── models/user.ts            ├── invoices/address-resolver.ts
  │   address_json ← active     │   SELECT * FROM users            ← still reading
  │   address_line1 ← unused    │    WHERE id = $1
  │   city ← unused             │   then row.address_line1,
  │   state ← unused            │   row.city, row.state, row.zip
  │   zip ← unused              │   (missing column → undefined)
  └── (no queries to old cols)  billing-service was on v1 of the
                                user schema. Migration was planned
  Claude verdict: safe to drop  but never shipped. Nobody knew.

The billing service was a separate Node.js application in a separate GitHub repository, deployed independently, maintained by a different team. It had been on the roadmap for schema migration for four months. The ticket existed. The PR had never been opened.

Every time the billing service generated an invoice after 9:47 AM, it ran SELECT * FROM users WHERE id = $1 and read row.address_line1, row.city, row.state, and row.zip off the result. An explicit column list would have failed loudly (Postgres raises "column does not exist"), but SELECT * just returns whatever columns remain, and a missing property in JavaScript reads as undefined. No error. No exception. Just blanks, formatted into an empty address block on the invoice PDF.
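That silent-blank behavior is easy to reproduce in a scratch database; the table and data here are illustrative, not ours:

```sql
-- A dropped column simply vanishes from SELECT *; readers get no warning.
CREATE TABLE demo_users (id int, city text, zip text);
INSERT INTO demo_users VALUES (1, 'Springfield', '62704');

ALTER TABLE demo_users DROP COLUMN city;  -- returns OK in milliseconds

SELECT * FROM demo_users WHERE id = 1;    -- succeeds: only id and zip remain
-- SELECT city FROM demo_users;           -- ERROR 42703: column does not exist
```

A client reading row.city off that result gets undefined, which most templating layers happily render as an empty string.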

The Damage

Between 9:47 AM and 11:31 AM — one hour and 44 minutes — 3,200 invoices were generated and sent. All of them had blank shipping addresses. The billing service ran on a schedule every 6 minutes, so we caught it at the second support ticket spike before the next batch fired.

We killed the billing service immediately. Then we stared at the recovery options.

The rollback in the migration was useless. DROP COLUMN does not preserve data. The column and its contents were gone. Our only options were:

  1. Restore from the previous night's backup, roll forward to just before the 9:47 AM migration, and manually replay the roughly 2.5 hours of writes since
  2. Re-extract the address fields from address_json (which we had luckily migrated before dropping)

Option 2 saved us. Because the migration had correctly copied data into address_json before dropping the columns, we could reconstruct the old fields: we added them back, populated them from the JSONB column, and re-ran the billing service against the affected window. Total incident duration: 14 hours.

sql — the recovery query
-- Emergency: restore address columns from address_json backup
ALTER TABLE users
  ADD COLUMN address_line1 TEXT,
  ADD COLUMN address_line2 TEXT,
  ADD COLUMN city TEXT,
  ADD COLUMN state TEXT,
  ADD COLUMN zip TEXT;

UPDATE users SET
  address_line1 = address_json->>'line1',
  address_line2 = address_json->>'line2',
  city          = address_json->>'city',
  state         = address_json->>'state',
  zip           = address_json->>'zip'
WHERE address_json IS NOT NULL;

-- Re-run billing service for affected window
-- SELECT id FROM invoices WHERE created_at BETWEEN '2026-03-19 09:47' AND '2026-03-19 11:31'
--   AND shipping_address IS NULL;
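After a restore like this, it is worth proving the round trip through JSONB was lossless before declaring the incident over. A sketch of that check, using the columns above:

```sql
-- Count rows where a restored flat column disagrees with its JSONB source.
-- IS DISTINCT FROM compares NULLs sanely, unlike <>.
SELECT count(*) AS mismatches
FROM users
WHERE address_json IS NOT NULL
  AND (address_line1 IS DISTINCT FROM address_json->>'line1'
    OR address_line2 IS DISTINCT FROM address_json->>'line2'
    OR city          IS DISTINCT FROM address_json->>'city'
    OR state         IS DISTINCT FROM address_json->>'state'
    OR zip           IS DISTINCT FROM address_json->>'zip');
-- Expect mismatches = 0 before re-running the billing window.
```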

The Rule We Now Follow: Two-Phase Deprecation

We never drop a column in a single migration anymore. Ever. Regardless of how many tools or humans have reviewed it.

The pattern is now mandatory:

sql — phase 1: stop writing (deploy and wait 2+ weeks)
-- Phase 1: Remove all writes to the old columns in application code.
-- Do NOT drop yet. Let the columns sit idle.
--
-- Postgres has no SELECT triggers, so the database cannot trap reads for
-- you. Instead, reset pg_stat_statements at the start of the window so the
-- call counts in phase 2 cover a known period:

SELECT pg_stat_statements_reset();

-- Optional tripwire: rename the columns (e.g. city -> city_deprecated) so
-- any surprise reader fails loudly with "column does not exist" instead of
-- silently getting nulls. A rename touches no data and reverses instantly.
-- If you do this, phase 2 must drop the renamed columns.
sql — phase 2: drop only after confirmed zero reads
-- Phase 2: Only after 2+ weeks of zero reads in pg_stat_statements
-- and explicit sign-off from ALL service owners.

-- Verify first. pg_stat_statements keeps no per-query timestamp, so these
-- call counts cover everything since the phase 1 reset:
SELECT query, calls
FROM pg_stat_statements
WHERE query ILIKE '%address_line1%';

-- If zero rows: safe to drop
ALTER TABLE users
  DROP COLUMN address_line1,
  DROP COLUMN address_line2,
  DROP COLUMN city,
  DROP COLUMN state,
  DROP COLUMN zip;
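One refinement worth adding to the drop itself: ALTER TABLE takes an ACCESS EXCLUSIVE lock on users, so if it queues behind one long-running query, every other query on the table queues behind it. A lock_timeout makes the drop fail fast and retryable instead (the 5-second value is our assumption; tune it to your traffic):

```sql
-- Run in the same session, immediately before the ALTER TABLE above.
-- If the exclusive lock is not acquired within 5 seconds, the drop aborts
-- cleanly and can be retried off-peak, instead of stalling all traffic.
SET lock_timeout = '5s';
```

On contention, Postgres raises "canceling statement due to lock timeout" and nothing is changed; retry during a quiet window.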

Lessons

1. AI coding assistants have exactly as much context as you give them. Claude saw one repo. The dependency lived in another. It was not a hallucination or a mistake — it was a correct answer to an incomplete question. The incompleteness was ours.

2. DROP COLUMN is irreversible. Treat it like a production incident waiting to happen. Any migration that drops a column should require explicit sign-off from every team that owns a service touching that database — not just the team running the migration. We now have a Slack checklist template for this.

3. "Deprecated in our codebase" is not the same as "unused." A column can be unused in your main app and critical in three services you have not looked at. Before dropping anything, query pg_stat_statements, grep every repository you own, and ask the question out loud: who else might be reading this?
