Slow queries are among the most common production issues — and they’re usually fixable with the right index or a query rewrite. Claude Code reads EXPLAIN ANALYZE output, identifies the bottleneck (sequential scan on a million-row table, nested loop with no index), and generates the fix: the right index, a CTE rewrite, or a materialized view.
This guide covers SQL optimization with Claude Code: interpreting EXPLAIN output, index strategies, N+1 queries, and PostgreSQL-specific patterns.
Reading EXPLAIN ANALYZE
This query takes 3 seconds on a 2 million row table.
EXPLAIN ANALYZE output:
(paste output here)
Here is what Claude Code looks for in the output, using this slow query as an example:
-- The slow query
SELECT u.name, u.email, COUNT(o.id) as order_count, SUM(o.total_cents) as lifetime_value
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2025-01-01'
AND u.country = 'US'
GROUP BY u.id, u.name, u.email
ORDER BY lifetime_value DESC
LIMIT 100;
QUERY PLAN
Limit  (cost=89432.50..89432.75 rows=100 width=56) (actual time=2943.123..2943.189 rows=100 loops=1)
  ->  Sort  (cost=89432.50..89682.50 rows=100000 width=56) (actual time=2943.121..2943.159 rows=100 loops=1)
        Sort Key: (sum(o.total_cents)) DESC
        Sort Method: top-N heapsort  Memory: 33kB
        ->  HashAggregate  (cost=82000.00..84500.00 rows=100000 width=56) (actual time=2801.234..2887.456 rows=247832 loops=1)
              Group Key: u.id
              ->  Hash Right Join  (cost=12500.00..78000.00 rows=800000 width=44) (actual time=123.456..2234.789 rows=1200000 loops=1)
                    Hash Cond: (o.user_id = u.id)
                    ->  Seq Scan on orders o  (cost=0.00..18000.00 rows=800000 width=12) (actual time=0.015..456.789 rows=800000 loops=1)
                    ->  Hash  (cost=10000.00..10000.00 rows=200000 width=36) (actual time=122.345..122.345 rows=247832 loops=1)
                          Buckets: 262144  Batches: 1  Memory Usage: 22MB
                          ->  Seq Scan on users u  (cost=0.00..10000.00 rows=200000 width=36) (actual time=0.012..89.234 rows=247832 loops=1)
                                Filter: ((country = 'US') AND (created_at > '2025-01-01'))
                                Rows Removed by Filter: 1752168
Claude Code’s analysis: “The problem is the Seq Scan on users with 1.75 million rows filtered out — only 247k kept. A composite index on (country, created_at) turns this into an Index Scan. The Seq Scan on orders is acceptable, since the aggregate needs every order belonging to the matched users anyway.”
-- Fix: composite index matching the WHERE clause column order
CREATE INDEX CONCURRENTLY idx_users_country_created
ON users (country, created_at)
WHERE country IS NOT NULL; -- Partial index — skip NULL country rows
-- After creating the index, the plan changes to:
-- Index Scan on users (actual time=0.123..45.678 rows=247832 loops=1)
-- Index Cond: ((country = 'US') AND (created_at > '2025-01-01'))
-- New total: ~200ms instead of 3 seconds
Index Strategies
What indexes does this query need?
Walk me through the decision.
-- Query: find active users by company who haven't ordered recently
SELECT u.id, u.email, u.name, u.last_login
FROM users u
WHERE u.company_id = $1
AND u.status = 'active'
AND u.last_order_at < NOW() - INTERVAL '90 days'
AND u.email_subscribed = true
ORDER BY u.last_login DESC
LIMIT 50;
Claude Code’s index analysis:
-- Option 1: index on the equality columns
CREATE INDEX idx_users_company_status
ON users (company_id, status)
WHERE status = 'active' AND email_subscribed = true;
-- Note: status as a key column is redundant here. The partial WHERE
-- already pins it, so (company_id) alone would index the same rows.
-- Option 2: Covering index (avoids table lookup entirely)
CREATE INDEX idx_users_company_active_covering
ON users (company_id, last_order_at, last_login DESC)
INCLUDE (id, email, name) -- Include columns needed by SELECT
WHERE status = 'active' AND email_subscribed = true;
-- The covering index is better here:
-- 1. Partial index (WHERE clause) shrinks the index to a fraction of the table
-- 2. INCLUDE columns let the query skip the heap entirely (an index-only
--    scan, as long as the visibility map is up to date)
-- 3. last_order_at as a key column lets the planner filter before sorting
-- Verify with EXPLAIN:
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... [the query above];
-- Look for: Index Only Scan with "Heap Fetches: 0", and buffer stats that
-- are mostly "shared hit" with few "read" entries
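After an index ships, it is worth confirming the planner actually uses it. A quick check against PostgreSQL's statistics views (a sketch; `idx_users_company_status` is the hypothetical index from above):

```sql
-- idx_scan stays at 0 for an index the planner never chooses
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'users';

-- Unused indexes still slow down every write; drop them once confirmed dead
-- DROP INDEX CONCURRENTLY idx_users_company_status;
```

Counters accumulate since the last stats reset, so give a new index real traffic before judging it.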
Eliminating N+1 Queries
Loading a list of 50 blog posts fires 101 database queries.
Fix it.
// The N+1 problem
const { rows: posts } = await db.query('SELECT * FROM posts LIMIT 50');
for (const post of posts) {
  // Two queries per post: 100 additional round trips!
  const author = await db.query(
    'SELECT * FROM users WHERE id = $1', [post.author_id]
  );
  post.author = author.rows[0];
  const tags = await db.query(
    'SELECT COUNT(*) FROM posts_tags WHERE post_id = $1', [post.id]
  );
  post.tagCount = Number(tags.rows[0].count);
}
-- Fix: single JOIN query
SELECT
p.id, p.title, p.excerpt, p.published_at,
u.id as author_id, u.name as author_name, u.avatar_url,
COUNT(pt.tag_id) as tag_count
FROM posts p
JOIN users u ON u.id = p.author_id
LEFT JOIN posts_tags pt ON pt.post_id = p.id
WHERE p.status = 'published'
GROUP BY p.id, p.title, p.excerpt, p.published_at, u.id, u.name, u.avatar_url
ORDER BY p.published_at DESC
LIMIT 50;
When the query also needs the tag names themselves (a many-to-many), aggregate them into an array:
-- Fetch tags as an array in one query (PostgreSQL array_agg)
SELECT
p.id, p.title, p.excerpt,
u.name as author_name,
array_agg(t.name ORDER BY t.name) FILTER (WHERE t.id IS NOT NULL) as tags
FROM posts p
JOIN users u ON u.id = p.author_id
LEFT JOIN posts_tags pt ON pt.post_id = p.id
LEFT JOIN tags t ON t.id = pt.tag_id
WHERE p.status = 'published'
GROUP BY p.id, p.title, p.excerpt, u.name
ORDER BY p.published_at DESC
LIMIT 50;
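When a JOIN fans out too much (many rows per post), another legitimate fix is to keep a fixed number of queries and batch the lookups with `= ANY`. This is the strategy most ORMs use for eager loading; a sketch against the same hypothetical schema, with the collected ids passed as array parameters:

```sql
-- Query 1: the posts themselves
SELECT id, title, excerpt, author_id
FROM posts
WHERE status = 'published'
ORDER BY published_at DESC
LIMIT 50;

-- Query 2: every needed author in one round trip
SELECT id, name, avatar_url
FROM users
WHERE id = ANY($1::int[]);  -- $1 = array of author_ids from query 1

-- Query 3: tag counts for all 50 posts at once
SELECT post_id, COUNT(*) AS tag_count
FROM posts_tags
WHERE post_id = ANY($1::int[])  -- $1 = array of post ids from query 1
GROUP BY post_id;
```

Three queries total regardless of list size, and the application stitches the results together in memory.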
Common Query Rewrites
This correlated subquery is slow. Can it be rewritten?
-- Slow: correlated subquery (runs once per user row)
SELECT
u.id, u.name,
(SELECT MAX(created_at) FROM orders WHERE user_id = u.id) as last_order_at,
(SELECT COUNT(*) FROM orders WHERE user_id = u.id) as order_count
FROM users u
WHERE u.status = 'active';
-- Faster: lateral join (one subquery per user instead of two, same result)
SELECT
u.id, u.name,
o.last_order_at,
o.order_count
FROM users u
JOIN LATERAL (
SELECT
MAX(created_at) as last_order_at,
COUNT(*) as order_count
FROM orders
WHERE user_id = u.id
) o ON true
WHERE u.status = 'active';
-- Another option: window functions over a single join, deduplicated with DISTINCT
SELECT DISTINCT
u.id, u.name,
MAX(o.created_at) OVER (PARTITION BY u.id) as last_order_at,
COUNT(o.id) OVER (PARTITION BY u.id) as order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.status = 'active';
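When even the rewritten aggregate is too slow for a hot path, the remaining option from the intro applies: precompute it. A sketch of a materialized view for the same per-user order stats (the view name is illustrative):

```sql
CREATE MATERIALIZED VIEW user_order_stats AS
SELECT
  user_id,
  MAX(created_at) AS last_order_at,
  COUNT(*)        AS order_count
FROM orders
GROUP BY user_id;

-- A unique index is required before CONCURRENTLY refreshes are allowed
CREATE UNIQUE INDEX ON user_order_stats (user_id);

-- Refresh on a schedule (cron, pg_cron) without blocking readers
REFRESH MATERIALIZED VIEW CONCURRENTLY user_order_stats;
```

The trade-off is staleness between refreshes, so this fits dashboards and reports, not balances or inventory.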
Partial Indexes for Common Filters
95% of our queries filter WHERE status = 'active'.
The status column has 5 possible values. Optimize this.
-- Full index: indexes all 5 million rows including 4.75 million non-active rows
CREATE INDEX idx_users_status ON users(status);
-- Partial index: only indexes the 250k active rows
CREATE INDEX idx_users_active ON users(email, last_login)
WHERE status = 'active';
-- 95% of queries use the partial index — 20x smaller, faster lookups
-- The 5% of queries for other statuses fall back to a sequential scan,
-- which is acceptable at that frequency
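The size claim is easy to verify directly. `pg_relation_size` and `pg_size_pretty` are real PostgreSQL built-ins; the index names are the hypothetical ones from above:

```sql
SELECT
  pg_size_pretty(pg_relation_size('idx_users_status')) AS full_index_size,
  pg_size_pretty(pg_relation_size('idx_users_active')) AS partial_index_size;
```

Run it after `ANALYZE` and a round of normal traffic so both indexes are fully built and representative.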
For database migration patterns that let you add indexes without downtime, see the database migrations guide. For ORM-level optimization in Prisma and Drizzle, the Prisma guide covers select vs include and eager loading patterns. The Claude Skills 360 bundle includes SQL optimization skill sets for query analysis and index design. Start with the free tier to analyze a slow query from your application.