
Claude Code for Database Work: SQL, Migrations, and Query Optimization

Published: May 6, 2026
Read time: 9 min
By: Claude Skills 360

Database work has an unusually high cost of mistakes — a slow query can take down production, a bad migration can corrupt data, a missing index can make your app feel broken. Claude Code handles database tasks well because it reads schema context before suggesting anything, rather than generating generic SQL that might be wrong for your specific tables.

This guide covers practical database workflows: writing complex queries, designing schemas, optimizing slow queries, and safe migration patterns.

Setting Up Claude Code for Database Work

Context is everything for database tasks. Put your schema in CLAUDE.md or in a dedicated schema.md file that Claude reads:

## Database: PostgreSQL 16

### Key tables and relationships
- users (id, email, created_at, org_id FK→orgs)
- orgs (id, name, plan, created_at)  
- orders (id, user_id FK→users, total_cents, status, created_at)
- order_items (id, order_id FK→orders, product_id, quantity, price_cents)
- products (id, name, sku, price_cents, inventory_count, deleted_at)

### Indexes (non-obvious ones)
- orders: idx on (user_id, status) — common query pattern
- products: partial index on (sku) WHERE deleted_at IS NULL

### Conventions
- Soft deletes: deleted_at timestamp (null = active)
- All monetary values in cents (integer), never float
- Multi-tenant: most queries must filter by org_id
- Timestamps: UTC, type timestamptz

### Do Not
- Never use float/numeric for money — integer cents only
- Never query across tenant boundaries
- Never add a NOT NULL column without a default to existing tables (lock risk)

See the CLAUDE.md setup guide for structuring this. With schema context in place, Claude generates SQL that matches your specific column names, understands your relationships, and respects your constraints.

Writing Complex SQL

Aggregations and Window Functions

Write a query showing each user's order stats:
total orders, total spend, average order value,
and their rank by total spend (1 = highest spender).
Filter out users with 0 orders.
SELECT 
    u.id,
    u.email,
    COUNT(o.id) AS order_count,
    SUM(o.total_cents) AS total_spend_cents,
    AVG(o.total_cents)::int AS avg_order_cents,
    RANK() OVER (ORDER BY SUM(o.total_cents) DESC) AS spend_rank
FROM users u
INNER JOIN orders o ON o.user_id = u.id
WHERE o.status = 'completed'
GROUP BY u.id, u.email
ORDER BY total_spend_cents DESC;

Claude correctly uses a window function (RANK() OVER) alongside aggregates, casts the numeric average to int (matching the cents convention), and excludes users without completed orders via the INNER JOIN plus the status filter.

CTEs for Complex Queries

I need a monthly revenue report for the last 12 months.
Show: month, revenue, number of orders, new customers,
and month-over-month growth percentage.
WITH monthly_stats AS (
    SELECT 
        DATE_TRUNC('month', o.created_at) AS month,
        SUM(o.total_cents) AS revenue_cents,
        COUNT(o.id) AS order_count
    FROM orders o
    WHERE o.status = 'completed'
        AND o.created_at >= NOW() - INTERVAL '12 months'
    GROUP BY DATE_TRUNC('month', o.created_at)
),
new_customers AS (
    SELECT month, COUNT(*) AS new_customer_count
    FROM (
        SELECT DATE_TRUNC('month', MIN(o.created_at)) AS month
        FROM orders o
        WHERE o.status = 'completed'
        GROUP BY o.user_id
    ) first_orders
    GROUP BY month
)
SELECT 
    ms.month,
    ms.revenue_cents,
    ms.order_count,
    COALESCE(nc.new_customer_count, 0) AS new_customers,
    ROUND(
        100.0 * (ms.revenue_cents - LAG(ms.revenue_cents) OVER (ORDER BY ms.month)) 
        / NULLIF(LAG(ms.revenue_cents) OVER (ORDER BY ms.month), 0), 
        1
    ) AS mom_growth_pct
FROM monthly_stats ms
LEFT JOIN new_customers nc ON nc.month = ms.month
ORDER BY ms.month;

The NULLIF(..., 0) guards against division by zero when the prior month's revenue is zero; LAG() returns NULL for the first month, so its growth is simply NULL. COALESCE fills in months with no new customers. Claude writes the full query correctly without hand-holding.

Recursive Queries

I have a categories table with parent_id (self-reference).
Write a query that returns the full hierarchy depth for any category.
WITH RECURSIVE category_tree AS (
    -- Base case: root categories
    SELECT id, name, parent_id, name::text AS path, 0 AS depth
    FROM categories
    WHERE parent_id IS NULL

    UNION ALL

    -- Recursive case: children
    SELECT c.id, c.name, c.parent_id, 
           ct.path || ' > ' || c.name, 
           ct.depth + 1
    FROM categories c
    INNER JOIN category_tree ct ON ct.id = c.parent_id
)
SELECT id, name, path, depth
FROM category_tree
ORDER BY path;

Recursive CTEs are a feature most developers don't reach for because the syntax is unfamiliar. Claude generates them correctly for tree and hierarchy problems.

Schema Design

Design a schema for a multi-tenant appointment booking system.
Tenants: businesses (salons, clinics, etc.)
Users book appointments with specific service providers.
Services have durations and prices.
A provider can have multiple availability windows per week.
Show CREATE TABLE statements.

Claude designs normalized tables with proper foreign keys, suggests indexes for query patterns (“customers will query by provider and date range”), and includes practical columns you’d otherwise forget (cancelled_at, notes, timezone). It warns when you’re designing something that will cause N+1 problems.
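A sketch of the kind of schema such a prompt produces — table and column names here are illustrative, not prescribed by the article, and the conventions (integer cents, timestamptz, a tenant key on every row) follow the CLAUDE.md rules above:

```sql
CREATE TABLE businesses (
    id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       TEXT NOT NULL,
    timezone   TEXT NOT NULL DEFAULT 'UTC',
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE providers (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    business_id BIGINT NOT NULL REFERENCES businesses(id),
    name        TEXT NOT NULL
);

CREATE TABLE services (
    id               BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    business_id      BIGINT NOT NULL REFERENCES businesses(id),
    name             TEXT NOT NULL,
    duration_minutes INT NOT NULL,
    price_cents      INT NOT NULL  -- integer cents, per convention
);

CREATE TABLE availability_windows (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    provider_id BIGINT NOT NULL REFERENCES providers(id),
    day_of_week INT NOT NULL CHECK (day_of_week BETWEEN 0 AND 6),
    starts_at   TIME NOT NULL,
    ends_at     TIME NOT NULL
);

CREATE TABLE appointments (
    id           BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    business_id  BIGINT NOT NULL REFERENCES businesses(id),  -- tenant key on every row
    provider_id  BIGINT NOT NULL REFERENCES providers(id),
    service_id   BIGINT NOT NULL REFERENCES services(id),
    customer_id  BIGINT NOT NULL,
    starts_at    TIMESTAMPTZ NOT NULL,
    cancelled_at TIMESTAMPTZ,  -- the "column you'd otherwise forget"
    notes        TEXT
);

-- Matches the stated query pattern: appointments by provider and date range
CREATE INDEX idx_appointments_provider_time ON appointments (provider_id, starts_at);
```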

Naming and Type Conventions

If your schema has established patterns (like cents for money), Claude follows them:

Add a billing_amount column to an existing table.
We use integer cents for all money — not decimal, not float.

Claude adds billing_amount_cents INTEGER NOT NULL DEFAULT 0 — the correct type for your cents convention, not NUMERIC(10,2).
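In DDL form, assuming a hypothetical invoices table:

```sql
-- Integer cents, per the schema convention — not NUMERIC(10,2)
ALTER TABLE invoices ADD COLUMN billing_amount_cents INTEGER NOT NULL DEFAULT 0;
```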

Query Optimization

This is where Claude Code’s ability to read your schema context pays off most. With schema in context:

This query is taking 8 seconds in production:
SELECT * FROM orders WHERE customer_email = $1 ORDER BY created_at DESC;
The orders table has 2M rows. How do I fix it?

Claude identifies: 1) no index on customer_email, 2) SELECT * includes large columns that aren’t needed. It writes:

-- Add index
CREATE INDEX CONCURRENTLY idx_orders_customer_email 
ON orders (customer_email, created_at DESC);

-- Revised query — select only needed columns
SELECT id, total_cents, status, created_at 
FROM orders 
WHERE customer_email = $1 
ORDER BY created_at DESC 
LIMIT 50;

CREATE INDEX CONCURRENTLY is the production-safe version — it builds the index without locking the table. Claude adds this automatically for production tables.

EXPLAIN ANALYZE

Here's the EXPLAIN ANALYZE output for a slow query.
Identify the bottleneck and suggest a fix.
[paste EXPLAIN output]

Claude reads the plan, identifies the expensive nodes (Seq Scan on large tables, Hash Join cost, nested loops), and explains specifically why. For most slow queries, it’s one of: missing index, bad statistics (needs ANALYZE), wrong join order, or function call preventing index use (like LOWER(email) = $1 vs a functional index).
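The last case deserves a concrete example. A predicate like LOWER(email) = $1 cannot use a plain index on email; an expression index matches it exactly (table name illustrative):

```sql
-- A plain index on (email) is useless for:  WHERE LOWER(email) = $1
-- An expression index on the same function call is used directly:
CREATE INDEX CONCURRENTLY idx_users_email_lower ON users (LOWER(email));
```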

N+1 Query Problem

My dashboard loads 50 projects, then makes a separate query per project 
to count its tasks. Fix the N+1 problem.
Here's the ORM-generated SQL: [paste]

Claude rewrites to a single query using a LEFT JOIN with COUNT in a subquery or window function, adapts to your ORM’s query builder syntax if you specify it, and explains the before/after performance difference.
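A minimal before/after sketch, assuming hypothetical projects and tasks tables:

```sql
-- Before (N+1): one query for projects, then one COUNT per project:
--   SELECT COUNT(*) FROM tasks WHERE project_id = $1;   -- × 50 round trips

-- After: one round trip, counting tasks per project in the join
SELECT p.id, p.name, COUNT(t.id) AS task_count
FROM projects p
LEFT JOIN tasks t ON t.project_id = p.id
GROUP BY p.id, p.name
ORDER BY p.name;
```

The LEFT JOIN keeps projects with zero tasks in the result, which the per-project COUNT queries would also have returned.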

Safe Database Migrations

Migrations that lock tables are the source of many production incidents. Claude Code generates migration-safe SQL:

Adding a Column

Add a metadata JSONB column to the users table. 
It should be nullable with a default of {}.
This is a production table with 500K rows.
-- Safe: a nullable column with a default is a metadata-only change (PG 11+)
ALTER TABLE users ADD COLUMN IF NOT EXISTS metadata JSONB DEFAULT '{}';

-- PostgreSQL 11+ fills existing rows with the default automatically;
-- on older versions, backfill remaining NULLs in batches
UPDATE users SET metadata = '{}' WHERE metadata IS NULL;

Claude notes when a migration might be risky and suggests the safe alternative. Adding a NOT NULL column without a default fails outright on a populated table, and before PostgreSQL 11, adding one with a default rewrote every row — a long lock on a 500K-row table. A nullable column with DEFAULT '{}' is a metadata-only change and near-instant.

Zero-Downtime Index Creation

I need an index on orders.email for a production table.
The table has 3M rows and can't go down.
-- CONCURRENTLY: builds index without locking table reads/writes
-- Trade-off: takes longer to build, can't run in a transaction block
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_email 
ON orders (email);

Backfilling Data Safely

I added a calculated_at column to a 2M row table.
I need to backfill it from another column.
Do it without locking the table.
-- Batch update — commit each chunk so no single transaction runs long.
-- COMMIT inside DO requires PostgreSQL 11+ and the block must not be
-- invoked inside an outer transaction.
DO $$
DECLARE
    batch_size INT := 10000;
    last_id BIGINT := 0;
    max_id BIGINT;
BEGIN
    SELECT MAX(id) INTO max_id FROM orders;

    WHILE last_id < max_id LOOP
        UPDATE orders
        SET calculated_at = created_at + INTERVAL '1 hour'
        WHERE id > last_id AND id <= last_id + batch_size
            AND calculated_at IS NULL;

        last_id := last_id + batch_size;
        COMMIT;                 -- release locks, let autovacuum keep up
        PERFORM pg_sleep(0.01); -- brief pause between batches
    END LOOP;
END $$;

Keeping each batch in its own short transaction, with a brief sleep between batches, avoids the long-running transactions that block autovacuum and delay replicas.

Using Claude Code for Database Debugging

Finding Slow Queries

Show me the top 10 slowest queries from pg_stat_statements.
Format as a table with query, avg ms, calls, total time.

Claude writes the pg_stat_statements query that filters out system queries and orders by mean execution time.
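A representative version of that query — column names are for PostgreSQL 13+, where mean_time/total_time became mean_exec_time/total_exec_time:

```sql
SELECT
    LEFT(query, 80)                    AS query,
    ROUND(mean_exec_time::numeric, 1)  AS avg_ms,
    calls,
    ROUND(total_exec_time::numeric, 0) AS total_ms
FROM pg_stat_statements
WHERE query NOT ILIKE '%pg_stat_statements%'  -- skip the monitoring query itself
ORDER BY mean_exec_time DESC
LIMIT 10;
```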

Lock Monitoring

My migration seems stuck. How do I check for locks?
Give me the queries to identify what's blocking what.

Claude generates the pg_locks join queries that show blocking chains — which query is holding a lock and which queries are waiting for it.
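On PostgreSQL 9.6+ you can skip the manual pg_locks join entirely: pg_blocking_pids() walks the lock graph for you. A minimal version of what Claude produces:

```sql
-- Each row pairs a waiting session with the session blocking it
SELECT
    waiting.pid            AS waiting_pid,
    waiting.query          AS waiting_query,
    blocking.pid           AS blocking_pid,
    blocking.query         AS blocking_query
FROM pg_stat_activity AS waiting
JOIN pg_stat_activity AS blocking
    ON blocking.pid = ANY (pg_blocking_pids(waiting.pid));
```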

Database Work + Testing

Testing database code properly requires real database interactions — not mocks. See the testing guide for setting up test database patterns. The short version: use a test database with transactions that roll back after each test (no cleanup code needed).
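The rollback pattern, stripped to its SQL essentials — every test wraps its fixtures and assertions in a transaction that is never committed (table and values are illustrative):

```sql
BEGIN;

-- test fixture
INSERT INTO users (email, org_id) VALUES ('test@example.com', 1);

-- ... run the code under test, assert on results ...

ROLLBACK;  -- fixture rows vanish; no cleanup code needed
```

Test frameworks typically manage this BEGIN/ROLLBACK pair per test automatically, so each test sees a pristine database.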

The code review guide covers database-specific review patterns — missing tenant isolation, dangerous migrations, N+1 patterns in PR diffs.

For comprehensive database development with production-ready query patterns, the Claude Skills 360 bundle includes database skills for PostgreSQL, MySQL, and SQLite — query generation, migration patterns, optimization workflows, and schema design. Start with the free tier to try the core query patterns.

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
