Claude Code for Database Sharding: Horizontal Partitioning, Routing, and Migration — Claude Skills 360 Blog
Architecture

Claude Code for Database Sharding: Horizontal Partitioning, Routing, and Migration

Published: August 22, 2026
Read time: 9 min
By: Claude Skills 360

Database sharding distributes data across multiple database instances, allowing write throughput and storage to scale horizontally beyond what a single server can handle. Claude Code helps design shard key selection, implement application-level routing, handle cross-shard queries, and plan the migration from a single database to a sharded setup without downtime.

This guide covers database sharding with Claude Code: shard key design, routing layer implementation, cross-shard queries, and migration strategy.

When to Shard

How do I know when I need to shard my PostgreSQL database?
What are the alternatives to try first?

Shard only after exhausting these options in order:

  1. Read replicas — read-heavy load, easy to add, no write benefit
  2. Connection pooling (PgBouncer/pgpool) — connection exhaustion at high concurrency
  3. Vertical scaling — instances up to 32 vCPU / 256 GB RAM handle most workloads
  4. Table partitioning — PostgreSQL native, partition pruning, single connection string
  5. Caching (Redis) — reduces DB load significantly for read patterns

Shard when: write throughput exceeds what a single primary can handle, OR data size grows past what a single instance manages cost-effectively (typically 2-5 TB of hot data).

Shard Key Design

Our orders table has 500M rows and writes are 50k/sec.
Help me design the sharding strategy.
Design the shard key and explain the tradeoffs.

Shard key criteria:

  1. High cardinality — enough distinct values to distribute evenly
  2. Immutable — never changes after record creation (resharding is painful)
  3. Present in most queries — avoids cross-shard scatter/gather
  4. Even distribution — avoids hot shards

Orders table access patterns:
- 90% of queries are: WHERE user_id = ? (and optional date filter)
- 8% are: WHERE order_id = ? (single-order lookup)  
- 2% are admin reports across all users

Best shard key: user_id
- All orders for a user land on the same shard
- 90% of queries hit exactly 1 shard
- Even distribution (millions of users)
- Immutable on orders

Routing scheme: hash(user_id) % num_shards — deterministic, no directory lookup needed
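Criterion 4 (even distribution) is worth verifying empirically before committing to a shard key. A minimal sketch: hash a batch of synthetic user IDs with FNV-1a (the same non-cryptographic hash the router below uses) and count assignments per shard.

```typescript
// Sketch: check that hash(user_id) % numShards spreads keys evenly.
function fnv1a(key: string): number {
  let hash = 2166136261;
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 16777619) >>> 0; // stay within uint32
  }
  return hash;
}

// Count how many of numKeys synthetic IDs land on each shard.
function shardCounts(numShards: number, numKeys: number): number[] {
  const counts = new Array(numShards).fill(0);
  for (let i = 0; i < numKeys; i++) {
    counts[fnv1a(`user-${i}`) % numShards]++;
  }
  return counts;
}

// With 100k keys on 4 shards, each shard should hold close to 25%;
// a large skew here means the key or hash needs rethinking.
console.log(shardCounts(4, 100_000));
```

Run this against a sample of your real IDs, not just synthetic ones — real key populations (e.g. a few power users) are where hot shards come from.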

Shard Router Implementation

// src/db/shard-router.ts
import { Pool, PoolConfig } from 'pg';

interface ShardConfig {
  id: number;
  connectionString: string;
  range?: [number, number]; // For range-based sharding
}

export class ShardRouter {
  private pools: Map<number, Pool> = new Map();
  private shards: ShardConfig[];
  private numShards: number;

  constructor(shards: ShardConfig[]) {
    this.shards = shards;
    this.numShards = shards.length;

    for (const shard of shards) {
      this.pools.set(shard.id, new Pool({
        connectionString: shard.connectionString,
        max: 20, // Per-shard pool
        idleTimeoutMillis: 30000,
      }));
    }
  }

  // Hash-based routing: consistent, no directory required
  getShardIdForUser(userId: string): number {
    // FNV-1a hash for good distribution
    let hash = 2166136261;
    for (let i = 0; i < userId.length; i++) {
      hash ^= userId.charCodeAt(i);
      hash = Math.imul(hash, 16777619) >>> 0; // 32-bit multiply — plain * exceeds 2^53 and loses precision
    }
    return hash % this.numShards;
  }

  // Get pool for a user's queries
  getPoolForUser(userId: string): Pool {
    const shardId = this.getShardIdForUser(userId);
    const pool = this.pools.get(shardId);
    if (!pool) throw new Error(`Shard ${shardId} not found`);
    return pool;
  }

  // For order_id lookups: order_id encodes shard_id at creation
  // Order ID format: {shard_id_2chars}{timestamp_8chars}{random_4chars}
  getPoolForOrderId(orderId: string): Pool {
    const shardId = parseInt(orderId.substring(0, 2), 16); // shard id was encoded at creation
    const pool = this.pools.get(shardId);
    if (!pool) throw new Error(`Shard ${shardId} not found`);
    return pool;
  }

  // Fan-out to all shards (for admin queries — use sparingly)
  async queryAll<T extends Record<string, any>>(query: string, params: any[] = []): Promise<T[]> {
    const results = await Promise.all(
      Array.from(this.pools.values()).map(pool =>
        pool.query<T>(query, params).then(r => r.rows),
      ),
    );
    return results.flat();
  }

  // Generate shard-aware order ID: {shard:2}{unix-seconds:8}{random:4} hex chars
  generateOrderId(userId: string): string {
    const shardId = this.getShardIdForUser(userId);
    const shardHex = shardId.toString(16).padStart(2, '0');
    // Unix seconds fit in 8 hex chars until 2106; Date.now() milliseconds would need 11
    const timestamp = Math.floor(Date.now() / 1000).toString(16).padStart(8, '0');
    const random = Math.floor(Math.random() * 0x10000).toString(16).padStart(4, '0');
    return `${shardHex}${timestamp}${random}`;
  }

  async close(): Promise<void> {
    await Promise.all(Array.from(this.pools.values()).map(p => p.end()));
  }
}
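The shard-aware ID scheme above is worth exercising in isolation: the property that matters is that the shard chosen at creation is always recoverable from the ID alone. A sketch with standalone encode/decode helpers mirroring the class methods:

```typescript
// Sketch: shard-aware order IDs — the shard id lives in the first
// two hex characters, so order_id lookups never need a directory
// lookup or a fan-out across shards.
function encodeOrderId(shardId: number): string {
  const shardHex = shardId.toString(16).padStart(2, '0');
  // Unix seconds fit in 8 hex chars until 2106
  const timestamp = Math.floor(Date.now() / 1000).toString(16).padStart(8, '0');
  const random = Math.floor(Math.random() * 0x10000).toString(16).padStart(4, '0');
  return `${shardHex}${timestamp}${random}`;
}

function decodeShardId(orderId: string): number {
  return parseInt(orderId.substring(0, 2), 16);
}

// Round trip: decode always recovers the shard chosen at creation.
const id = encodeOrderId(3);
console.log(id, decodeShardId(id));
```

Two hex characters cap the scheme at 256 shards; widen the prefix if you ever expect more.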

// src/repositories/orders.repository.ts
import { ShardRouter } from '../db/shard-router';
// Order, CreateOrderData, PaginationOpts, RevenueReport are app-level types (definitions not shown)
export class OrdersRepository {
  constructor(private router: ShardRouter) {}

  async create(userId: string, data: CreateOrderData): Promise<Order> {
    const pool = this.router.getPoolForUser(userId);
    const orderId = this.router.generateOrderId(userId);

    const { rows } = await pool.query<Order>(
      `INSERT INTO orders (id, user_id, status, total_cents, created_at)
       VALUES ($1, $2, 'pending', $3, NOW())
       RETURNING *`,
      [orderId, userId, data.totalCents],
    );
    return rows[0];
  }

  async findByUser(userId: string, opts: PaginationOpts): Promise<Order[]> {
    // All on one shard — fast single-shard query
    const pool = this.router.getPoolForUser(userId);
    const { rows } = await pool.query<Order>(
      `SELECT * FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3`,
      [userId, opts.limit, opts.offset],
    );
    return rows;
  }

  async findById(orderId: string): Promise<Order | null> {
    // Shard encoded in order ID — single shard lookup
    const pool = this.router.getPoolForOrderId(orderId);
    const { rows } = await pool.query<Order>(
      `SELECT * FROM orders WHERE id = $1`,
      [orderId],
    );
    return rows[0] ?? null;
  }

  async getRevenueReport(startDate: Date, endDate: Date): Promise<RevenueReport> {
    // Fan-out required — admin query across all shards
    const results = await this.router.queryAll<{ total: string; count: string }>(
      `SELECT SUM(total_cents) as total, COUNT(*) as count
       FROM orders WHERE created_at BETWEEN $1 AND $2 AND status = 'completed'`,
      [startDate, endDate],
    );

    return {
      totalRevenueCents: results.reduce((sum, r) => sum + parseInt(r.total || '0'), 0),
      orderCount: results.reduce((sum, r) => sum + parseInt(r.count || '0'), 0),
    };
  }
}
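One cross-shard pattern the repository above sidesteps is sorted pagination across all users (e.g. "latest 20 orders globally"): each shard must return its own first `limit + offset` rows, and the router merges them before applying the global limit and offset. A sketch of the merge step, with the per-shard fetch results passed in (the `Row` shape is hypothetical):

```typescript
interface Row {
  createdAt: number; // epoch millis, the global sort key
  id: string;
}

// Sketch: scatter/gather pagination. Each element of `pages` is one
// shard's result, already sorted by createdAt DESC and capped at
// (limit + offset) rows; the merge re-sorts the union and applies
// the limit/offset globally.
function mergeShardPages(pages: Row[][], limit: number, offset: number): Row[] {
  return pages
    .flat()
    .sort((a, b) => b.createdAt - a.createdAt)
    .slice(offset, offset + limit);
}
```

Note that every shard must over-fetch `limit + offset` rows, which is why deep OFFSET pagination across shards gets expensive fast; keyset pagination (`WHERE created_at < $cursor`) keeps each shard's fetch at `limit` regardless of page depth.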

PostgreSQL Native Partitioning (Alternative to Application Sharding)

For tables that need scale but live within one PostgreSQL cluster:

-- Range partitioning by date — for time-series data
CREATE TABLE orders (
  id UUID NOT NULL,
  user_id UUID NOT NULL,
  status TEXT NOT NULL,
  total_cents INTEGER NOT NULL,
  created_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (created_at);

-- Create partitions per quarter
CREATE TABLE orders_2025_q1 PARTITION OF orders
  FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');

CREATE TABLE orders_2025_q2 PARTITION OF orders
  FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');

-- Index on the partitioned parent — PostgreSQL 11+ propagates it to every partition
CREATE INDEX ON orders (user_id, created_at DESC);

-- Auto-create future partitions with pg_partman
-- (pg_partman 5.x syntax shown; 4.x also required p_type := 'native'
-- and accepted named intervals like 'quarterly')
SELECT partman.create_parent(
  p_parent_table := 'public.orders',
  p_control := 'created_at',
  p_interval := '3 months',
  p_start_partition := '2025-01-01'
);

Migration Strategy: Monolith to Sharded

We need to migrate 500M rows from a single PostgreSQL to 4 shards
with zero downtime. What's the procedure?

Zero-downtime migration in phases:

Phase 1 (Week 1-2): Dual-write
- Deploy application that writes to BOTH old DB and new sharded DBs
- New reads still go to old DB
- New writes go to both

Phase 2 (Week 2-3): Backfill historical data
- Streaming backfill: read old DB in batches, insert into shards
- Use a logical replication slot to capture writes that land during the backfill, then replay them onto the shards
- Monitor slot lag — the old DB retains WAL until the slot catches up, so a slow backfill bloats disk

Phase 3 (Week 3): Cut reads
- Deploy reads to new sharded DBs
- Monitor query patterns, latency, error rates
- Old DB still receives writes as fallback

Phase 4 (Week 4): Cut writes
- Stop dual-write
- All traffic now on sharded DBs
- Keep old DB in read-only mode for 1 week as rollback option

Phase 5 (Week 5): Decommission
- Verify no traffic to old DB
- Archive and decommission
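The dual-write step in Phase 1 has one non-obvious rule: a failed write to the old DB (still the source of truth) must fail the request, while a failed shard write should be recorded for reconciliation rather than surfaced to the user — the backfill will repair it. A minimal sketch, with the two writes injected as async functions (the names are illustrative, not from the article's codebase):

```typescript
type WriteFn = () => Promise<void>;

// Sketch: dual-write wrapper for Phase 1 of the migration.
// The legacy DB remains the source of truth; the shard write is
// best-effort, and its failures are queued for reconciliation.
async function dualWrite(
  writeLegacy: WriteFn,
  writeShard: WriteFn,
  onShardFailure: (err: unknown) => void,
): Promise<void> {
  await writeLegacy(); // must succeed — errors propagate to the caller
  try {
    await writeShard();
  } catch (err) {
    onShardFailure(err); // record for backfill reconciliation; don't fail the request
  }
}
```

Flipping which side is authoritative (and eventually dropping the legacy write) is then a deploy-time change in this one wrapper rather than a sweep through every repository method.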

For database migration patterns (zero-downtime column renames, index creation), see the database migrations guide. For event streaming with Debezium to capture changes during migrations, see the CDC guide. The Claude Skills 360 bundle includes database architecture skill sets covering sharding, partitioning, and read replica patterns. Start with the free tier to try database design code generation.

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.

Back to Blog

Get 360 skills free