Database migrations are the most dangerous part of production deployments. Schema changes that look safe in development cause downtime in production when locks contend with live traffic, when index builds block reads, or when both old and new code run simultaneously during a rolling deploy. Claude Code generates migrations that are both correct and safe for the traffic patterns your application actually has.
Safe Migration Patterns
We need to add a non-nullable column to a users table with 2 million rows.
Do it without locking the table for the 45 minutes it takes to backfill.
The naive approach locks the table:
-- DANGEROUS on large tables before PostgreSQL 11: rewrites the whole table
-- under an ACCESS EXCLUSIVE lock. (Since PG 11 a constant default like this
-- is metadata-only, but the multi-step pattern below stays safe everywhere.)
ALTER TABLE users ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}';
Zero-downtime approach (3 deployments):
-- Migration 1: Add nullable column — instant, no lock
ALTER TABLE users ADD COLUMN metadata JSONB;
// Deploy v2 of application: writes to both old fields + new metadata field
// Reads from old field (new field may be null for old rows)
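The v2 dual-write step can be sketched in application code. This is a minimal illustration, not the actual app: `plan`, `buildUserWrite`, and `readPlan` are hypothetical names; the point is that every write populates both the old shape and the new `metadata` column while reads still tolerate NULL.

```typescript
// Hypothetical v2 write/read paths during the add-metadata rollout.
// `plan` stands in for an existing field; names are illustrative only.
interface UserRow {
  plan: string;                      // old field, still the source of truth
  metadata: { plan: string } | null; // new column, NULL on not-yet-backfilled rows
}

// Every v2 write populates both shapes, so rows written during the
// rollout never need backfilling.
export function buildUserWrite(plan: string): UserRow {
  return { plan, metadata: { plan } };
}

// v2 reads keep using the old field until the backfill completes.
export function readPlan(row: UserRow): string {
  return row.plan;
}
```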
-- Migration 2: Backfill existing rows in batches. Run this outside any
-- outer transaction: the COMMIT after each batch releases row locks as
-- you go (transaction control inside DO requires PostgreSQL 11+; without
-- it, the whole loop is one long lock-holding transaction).
DO $$
DECLARE
  batch_size INT := 1000;
  last_id BIGINT := 0;
  max_id BIGINT;
BEGIN
  SELECT MAX(id) INTO max_id FROM users;
  WHILE last_id < max_id LOOP
    UPDATE users
    SET metadata = '{}'::jsonb
    WHERE id > last_id
      AND id <= last_id + batch_size
      AND metadata IS NULL;
    last_id := last_id + batch_size;
    COMMIT;                 -- release locks between batches (PostgreSQL 11+)
    PERFORM pg_sleep(0.01); -- brief pause to avoid overwhelming the DB
  END LOOP;
END
$$;
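When the backfill is driven from a job runner instead of PL/pgSQL, the same batching arithmetic applies. A small sketch of just the range computation (the DB calls themselves are omitted); ranges are half-open on the left, matching `id > last AND id <= last + batch_size`:

```typescript
// Compute (lastId, upperBound] ranges covering ids 1..maxId, mirroring
// the batched UPDATE in the DO block above. Each range would be one
// UPDATE in its own transaction.
export function batchRanges(maxId: number, batchSize: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let last = 0; last < maxId; last += batchSize) {
    // WHERE id > last AND id <= last + batchSize
    ranges.push([last, last + batchSize]);
  }
  return ranges;
}
```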
-- Migration 3: After the backfill, set the default for new rows, then add
-- the constraint. Note SET NOT NULL scans the whole table under an ACCESS
-- EXCLUSIVE lock; on very large tables, add CHECK (metadata IS NOT NULL)
-- NOT VALID, VALIDATE it, then SET NOT NULL (PostgreSQL 12+ uses the
-- validated check to skip the scan).
ALTER TABLE users ALTER COLUMN metadata SET DEFAULT '{}';
ALTER TABLE users ALTER COLUMN metadata SET NOT NULL;
Adding Indexes Without Blocking
Add an index to orders.customer_id. The table has 50M rows.
We can't take downtime.
-- WRONG: Locks the table during index build (minutes on large tables)
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-- RIGHT: CREATE INDEX CONCURRENTLY builds without blocking reads or writes.
-- It is slower (two table scans, plus waiting out concurrent transactions)
-- and cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
-- IMPORTANT: CONCURRENTLY can fail; if it does, an INVALID index is left
-- behind. Find and drop invalid indexes before retrying:
SELECT c.relname AS index_name
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE NOT i.indisvalid;
-- DROP INDEX CONCURRENTLY idx_orders_customer_id; -- Then retry
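A retry script might first turn the catalog query's results into cleanup statements. A sketch under one assumption: the query aliases the index name as `index_name` (adjust to whatever column name your query returns).

```typescript
// Given rows from an invalid-index catalog query, produce the DROP
// statements to run before retrying CREATE INDEX CONCURRENTLY.
interface InvalidIndexRow {
  index_name: string; // assumed alias from the catalog query
}

export function dropStatements(rows: InvalidIndexRow[]): string[] {
  // JSON.stringify produces a double-quoted string, which doubles as a
  // quoted SQL identifier for ordinary index names.
  return rows.map(
    (r) => `DROP INDEX CONCURRENTLY IF EXISTS ${JSON.stringify(r.index_name)};`
  );
}
```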
Renaming Columns Safely
Rename users.user_name to users.username (remove underscore).
Old code and new code run simultaneously during deployment.
This is an expand-contract migration: five steps, alternating schema migrations and application deploys:
-- Step 1: Add new column
ALTER TABLE users ADD COLUMN username VARCHAR(50);
-- Step 2 (in app code): Write to both columns, read from old
-- Step 3: Backfill new column from old (batch this on large tables,
-- as in the metadata backfill above)
UPDATE users SET username = user_name WHERE username IS NULL;
-- Step 4 (in app code): Write to both, read from new
-- Step 5: Drop old column
ALTER TABLE users DROP COLUMN user_name;
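The app-code steps (2 and 4) can be captured with a tiny phase switch. A sketch with hypothetical names; `phase` stands in for whatever deploy-time config your app uses:

```typescript
// Expand-contract phases for the user_name -> username rename.
// 'expand'   = steps 2-3: write both columns, read the old one.
// 'contract' = step 4:    still write both, read the new one.
type RenamePhase = 'expand' | 'contract';

// Dual-write in every phase, so no rows are missed during rollout.
export function buildWrite(value: string): { user_name: string; username: string } {
  return { user_name: value, username: value };
}

export function readUsername(
  row: { user_name: string; username: string | null },
  phase: RenamePhase
): string {
  // The old column stays the source of truth until the backfill completes;
  // falling back to it also covers stray NULLs in the new column.
  return phase === 'contract' && row.username !== null ? row.username : row.user_name;
}
```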
Flyway Configuration
Set up Flyway for our Node.js + PostgreSQL project.
Naming conventions for versioned and repeatable migrations.
db/migrations/
├── V1__initial_schema.sql
├── V2__add_users_table.sql
├── V3__add_orders_table.sql
├── V4__add_customer_id_index.sql ← CONCURRENTLY
├── V4.1__backfill_metadata.sql ← Batched backfill
├── V5__metadata_not_null.sql
└── R__refresh_search_vectors.sql ← Repeatable (R prefix)
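A quick sanity check on the naming convention can be encoded as a regex. This sketch accepts Flyway's versioned `V<version>__<description>.sql` shape (dotted versions like 4.1 included) and repeatable `R__<description>.sql`, and rejects everything else; it deliberately ignores less common prefixes such as undo migrations.

```typescript
// Validate migration filenames against Flyway's V/R naming convention:
// V<major>[.<minor>...]__<description>.sql  or  R__<description>.sql
const MIGRATION_NAME = /^(V\d+(\.\d+)*|R)__[A-Za-z0-9_]+\.sql$/;

export function isValidMigrationName(filename: string): boolean {
  return MIGRATION_NAME.test(filename);
}
```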
-- V3__add_orders_table.sql
-- Flyway checksums each applied file; modifying it later fails validation
-- If you need to change it, create a new migration
CREATE TABLE orders (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  customer_id UUID NOT NULL REFERENCES users(id),
  status VARCHAR(20) NOT NULL DEFAULT 'pending'
    CHECK (status IN ('pending', 'processing', 'shipped', 'delivered', 'cancelled')),
  total_cents INTEGER NOT NULL CHECK (total_cents > 0),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Plain CREATE INDEX is fine here: the table is brand new and empty, so
-- there is no traffic to block. CONCURRENTLY is only needed on existing
-- large tables, and must run outside a transaction (see V4; Flyway can
-- run a script non-transactionally via executeInTransaction=false in
-- that script's configuration)
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
CREATE INDEX idx_orders_status ON orders (status) WHERE status != 'delivered';
CREATE INDEX idx_orders_created_at ON orders (created_at DESC);
# flyway.conf
flyway.url=jdbc:postgresql://localhost:5432/mydb
flyway.user=${DB_USER}
flyway.password=${DB_PASSWORD}
flyway.locations=filesystem:db/migrations
flyway.validateOnMigrate=true
# Strict ordering: reject migrations versioned below the latest applied
flyway.outOfOrder=false
# Set to true only when adopting Flyway on a database that already has a schema
flyway.baselineOnMigrate=false
// src/db/migrate.ts — run migrations on startup
import { execSync } from 'child_process';

export function runMigrations() {
  try {
    execSync('npx flyway -configFiles=flyway.conf migrate', {
      stdio: 'inherit',
      env: {
        ...process.env,
        DB_USER: process.env.DATABASE_USER,
        DB_PASSWORD: process.env.DATABASE_PASSWORD,
      },
    });
  } catch (error) {
    console.error('Migration failed:', error);
    process.exit(1); // Don't start the app if migrations fail
  }
}
Data Migrations with Drizzle
We're migrating from storing prices as dollars (float) to cents (integer).
Half our rows are already migrated. Do it safely without data loss.
// drizzle/migrations/0012_price_cents.ts
import { sql } from 'drizzle-orm';
// `Database` stands for your driver's client type (e.g. NodePgDatabase);
// the migration receives it as a parameter rather than importing a singleton

export async function up(db: Database) {
  // Add new column (IF NOT EXISTS makes the step safe to re-run)
  await db.execute(sql`ALTER TABLE products ADD COLUMN IF NOT EXISTS price_cents INTEGER`);

  // Migrate data: multiply by 100 and round to avoid float precision issues.
  // The price_cents IS NULL guard skips rows that are already migrated,
  // so a partially completed earlier run is never overwritten
  await db.execute(sql`
    UPDATE products
    SET price_cents = ROUND(price_dollars * 100)::INTEGER
    WHERE price_dollars IS NOT NULL
      AND price_cents IS NULL
  `);

  // Validate no NULL values before adding the constraint
  const { rows } = await db.execute(sql`
    SELECT COUNT(*) AS missing FROM products WHERE price_cents IS NULL
  `);
  if (Number(rows[0].missing) > 0) {
    throw new Error(`Migration failed: ${rows[0].missing} rows have NULL price_cents`);
  }

  // Now safe to make it NOT NULL
  await db.execute(sql`ALTER TABLE products ALTER COLUMN price_cents SET NOT NULL`);
}
export async function down(db: Database) {
  // Rollback: convert cents back to dollars
  await db.execute(sql`
    ALTER TABLE products ADD COLUMN IF NOT EXISTS price_dollars NUMERIC(10,2)
  `);
  await db.execute(sql`
    UPDATE products SET price_dollars = price_cents::NUMERIC / 100
    WHERE price_cents IS NOT NULL
  `);
  await db.execute(sql`ALTER TABLE products DROP COLUMN price_cents`);
}
CLAUDE.md for Migration Safety
## Database Migration Rules
- Never modify a migration file after it's been deployed — create a new one
- Large table changes: multi-step (add nullable → backfill → add constraint)
- Indexes on large tables: CREATE INDEX CONCURRENTLY in a separate migration (outside transaction)
- Data migrations: validate row count matches before NOT NULL constraint
- Backward-compatible changes only during rolling deploys:
  - New columns: nullable first
  - Removed columns: stop writing/reading first, then drop after full deploy
  - Renamed columns: expand-contract pattern (add new → dual-write → backfill → switch reads → drop old)
- Test migrations on staging with production-sized data before deploying
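Some of the rules above can be partially machine-checked before review. A toy lint pass, under loud assumptions: it is regex-based rather than a SQL parser, covers only two of the rules, and will miss multi-line statements, so treat it as a pre-review aid only.

```typescript
// Toy migration linter for two CLAUDE.md rules: no single-step NOT NULL
// column additions, and no blocking index builds on existing tables.
export function lintMigration(sqlText: string): string[] {
  const warnings: string[] = [];
  // Flags e.g. ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}'
  if (/ADD\s+COLUMN\s+\w+\s+\w+.*NOT\s+NULL/i.test(sqlText)) {
    warnings.push('ADD COLUMN ... NOT NULL: use add-nullable -> backfill -> SET NOT NULL');
  }
  // Flags CREATE INDEX without the CONCURRENTLY keyword
  if (/CREATE\s+(UNIQUE\s+)?INDEX\s+(?!CONCURRENTLY)/i.test(sqlText)) {
    warnings.push('CREATE INDEX without CONCURRENTLY blocks writes on large tables');
  }
  return warnings;
}
```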
For the ORM-level query patterns these schemas support, see the database and SQL guide. For the observability that shows you migration performance in production, see the OpenTelemetry guide. The Claude Skills 360 bundle includes database migration skill sets for zero-downtime changes, data migrations, and rollback strategies. Start with the free tier to try safe migration generation.