Most microservices migrations fail not because the pattern is wrong, but because teams try to do it all at once. Rewriting a large monolith into dozens of services simultaneously creates a multi-year project where nothing works until everything works. The Strangler Fig pattern solves this: extract one bounded context at a time, prove it in production, then move the next one. Claude Code helps identify extraction boundaries, generate the scaffolding, and maintain the routing layer that makes the migration invisible to clients.
## Before You Migrate: Is It Worth It?
> Our Rails monolith has 200k lines of code. The team wants microservices.
> When is this actually a good idea vs. premature complexity?
Good reasons to migrate:
- Different components need different scaling (checkout needs 100x more capacity than admin)
- Teams are blocked by code ownership — 5 teams all making changes to the same database
- Deployment coupling: you need to deploy accounting to fix a bug in auth
- Technology lock-in: one part needs a Go performance profile, another needs ML Python
Bad reasons (that feel compelling):
- “Tech debt” — microservices don’t solve messy code, they distribute it
- “Scalability” — most monoliths scale fine; add read replicas and caches first
- “Modern architecture” — distributed systems are harder to debug and deploy
If you do migrate: start with the Strangler Fig. Never rewrite everything at once.
## Phase 1: Map the Domain Boundaries
> Analyze this Rails monolith and identify bounded contexts
> suitable for extraction. Focus on natural seams in the data model.
Claude Code reads your codebase and identifies:
- Database tables that are primarily accessed by one business domain
- Service objects that don’t cross domain boundaries
- API endpoints that map cleanly to a subdomain
> Read the models/ directory and identify which tables could be owned
> by an independent service. Show the dependency graph.
Typical output from a Rails e-commerce monolith:
```markdown
## Domain Analysis

### Clean extraction candidates (minimal coupling):

**Inventory Service**
- Tables: inventory_items, warehouses, stock_moves
- Only joins: products (via product_id FK) — can be replaced with API call
- Current consumers: 2 controllers, 1 background job
- No circular dependencies

**Shipping Service**
- Tables: shipments, shipping_addresses, carriers
- Only joins: orders (order_id FK)
- External API: FedEx/UPS integration already isolated in ShippingCalculator class

### Complex (extract later):

**Order Service** — depends on: customers, products, inventory, payments, shipping
Extract AFTER separating inventory and payments

**Auth/Customer** — too entangled with every other domain
Extract LAST or keep in monolith
```
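The boundary analysis above boils down to a simple graph question: how many foreign keys would cross the proposed service boundary? Each crossing FK becomes a network call after extraction. A minimal sketch of that check, with illustrative table and domain names (not from a real schema):

```python
# Hypothetical sketch: rank domains as extraction candidates by counting
# foreign keys that cross the proposed boundary. Every cross-domain FK
# becomes an API call once the domain is extracted.

# (table, referenced_table) pairs, e.g. scraped from schema.rb or pg_catalog
FOREIGN_KEYS = [
    ("inventory_items", "products"),
    ("inventory_items", "warehouses"),
    ("stock_moves", "inventory_items"),
    ("shipments", "orders"),
    ("shipments", "carriers"),
    ("orders", "customers"),
    ("orders", "products"),
]

# Proposed domain ownership of tables
DOMAINS = {
    "inventory": {"inventory_items", "warehouses", "stock_moves"},
    "shipping": {"shipments", "shipping_addresses", "carriers"},
    "orders": {"orders"},
}

def cross_domain_edges(domain):
    """FKs with exactly one endpoint inside the domain — future API calls."""
    tables = DOMAINS[domain]
    return [(src, dst) for src, dst in FOREIGN_KEYS
            if (src in tables) != (dst in tables)]

for name in DOMAINS:
    edges = cross_domain_edges(name)
    print(f"{name}: {len(edges)} cross-domain FK(s): {edges}")
```

On this toy schema, inventory and shipping each have one boundary-crossing FK while orders has three — matching the analysis above that orders should be extracted later.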
## Phase 2: Strangler Fig Routing Layer
> We're extracting the inventory service first.
> Set up routing so both the monolith and new service work in parallel.
```nginx
# nginx.conf — routes traffic between monolith and extracted services
upstream monolith {
    server rails-app:3000;
}

upstream inventory_service {
    server inventory-service:8080;
}

server {
    listen 80;

    # New inventory service handles its own endpoints
    location ~ ^/api/v1/inventory(.*) {
        proxy_pass http://inventory_service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fall back to the monolith if the new service is down (during migration)
        error_page 502 503 = @monolith_fallback;
    }

    # Everything else goes to the monolith
    location / {
        proxy_pass http://monolith;
    }

    location @monolith_fallback {
        proxy_pass http://monolith;
    }
}
```
For application-layer routing (Cloudflare Workers, feature flags):
```typescript
// Anti-corruption layer: translates between old and new domain models.
// Lives in the monolith, called instead of hitting inventory tables directly.
class InventoryAdapter {
  private useNewService: boolean;

  constructor() {
    // Feature flag: gradually roll out traffic to the new service
    this.useNewService = FeatureFlags.isEnabled('inventory_service_enabled');
  }

  async getStockLevel(productId: string): Promise<number> {
    if (this.useNewService) {
      try {
        const response = await fetch(`${process.env.INVENTORY_SERVICE_URL}/stock/${productId}`);
        if (!response.ok) {
          // fetch doesn't reject on HTTP errors — surface them explicitly
          throw new Error(`Inventory service returned ${response.status}`);
        }
        const data = await response.json();
        return data.quantityAvailable;
      } catch (err) {
        // Shadow mode: fall back to the monolith DB query on error
        logger.warn('Inventory service unavailable, using monolith', { productId, err });
        return this.getStockFromMonolith(productId);
      }
    }
    return this.getStockFromMonolith(productId);
  }

  private async getStockFromMonolith(productId: string): Promise<number> {
    const item = await InventoryItem.findByProductId(productId);
    return item?.quantityOnHand ?? 0;
  }
}
```
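The "gradually roll out" comment deserves one concrete mechanism. A boolean flag flips all traffic at once; percentage-based rollout is safer. One common approach is to hash a stable key so each product consistently routes to the same backend while the percentage ramps up. A minimal sketch (the `ROLLOUT_PERCENT` knob and function name are illustrative, not part of any specific feature-flag library):

```python
# Hypothetical sketch of percentage-based rollout: hash the product ID into a
# stable bucket in [0, 100) so a given product always gets the same decision
# at a given rollout percentage.
import hashlib

ROLLOUT_PERCENT = 25  # start small, raise as parity checks pass

def use_new_service(product_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministic bucketing: same ID, same routing decision."""
    digest = hashlib.sha256(product_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent
```

Because the bucket depends only on the ID, raising the percentage only adds products to the new service — none flap back and forth between backends mid-rollout.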
## Phase 3: Data Decomposition
The hard part: extracting the database.
> How do we move inventory tables out of the monolith's database
> without downtime?
### Step 1: Dual-write period (no downtime)
```ruby
# In the monolith: write to both the old and new locations simultaneously
class InventoryItem < ApplicationRecord
  after_create :sync_to_inventory_service
  after_update :sync_to_inventory_service

  private

  def sync_to_inventory_service
    return unless FeatureFlags.enabled?(:inventory_dual_write)
    InventoryServiceSync.perform_async(id, 'upsert', attributes)
  rescue => e
    # Never fail the primary transaction due to sync errors
    Rails.logger.error("Inventory sync failed: #{e.message}")
  end
end
```
### Step 2: Backfill historical data
```python
# scripts/backfill_inventory.py — migrate existing data in resumable batches
import os

import httpx
import psycopg2

MONOLITH_DB_URL = os.environ["MONOLITH_DB_URL"]
INVENTORY_SERVICE_URL = os.environ["INVENTORY_SERVICE_URL"]


def backfill_inventory_items(batch_size=500, start_id=0):
    monolith_conn = psycopg2.connect(MONOLITH_DB_URL)
    client = httpx.Client(base_url=INVENTORY_SERVICE_URL)
    cursor = monolith_conn.cursor()
    last_id = start_id
    while True:
        cursor.execute("""
            SELECT id, product_id, quantity_on_hand, warehouse_id, updated_at
            FROM inventory_items
            WHERE id > %s
            ORDER BY id
            LIMIT %s
        """, (last_id, batch_size))
        rows = cursor.fetchall()
        if not rows:
            print("Backfill complete")
            break

        # Bulk upsert to the new service
        response = client.post('/admin/bulk-upsert', json={
            'items': [
                {
                    'id': str(row[0]),
                    'product_id': str(row[1]),
                    'quantity_available': row[2],
                    'warehouse_id': str(row[3]),
                    'synced_at': row[4].isoformat(),
                }
                for row in rows
            ]
        })
        response.raise_for_status()
        last_id = rows[-1][0]
        print(f"Backfilled {len(rows)} items, last ID: {last_id}")
```

Keyset pagination (`WHERE id > %s ORDER BY id`) makes the script resumable: if it dies, restart it with `start_id` set to the last logged ID, and the dual-write hook keeps new rows in sync while the backfill catches up.
### Step 3: Cutover — read from new service, stop dual-write
```ruby
# In the monolith: after verifying data parity
FeatureFlags.enable(:inventory_service_reads)    # Read from the new service
FeatureFlags.disable(:inventory_monolith_writes) # Stop writing to the monolith DB

# Monitor for 7 days, then:
# DROP TABLE inventory_items; -- from the monolith DB
```
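"After verifying data parity" is the step that makes the cutover safe, so it is worth automating. A minimal sketch of a parity check that compares stock levels from both backends for a sample of product IDs; `fetch_monolith` and `fetch_service` are hypothetical stand-ins for a psycopg2 query and an httpx GET:

```python
# Hypothetical parity check run during the dual-write window: any mismatch
# means the sync or backfill has a gap and cutover should wait.
def parity_report(product_ids, fetch_monolith, fetch_service):
    """Return a list of (product_id, monolith_value, service_value) mismatches."""
    mismatches = []
    for pid in product_ids:
        old, new = fetch_monolith(pid), fetch_service(pid)
        if old != new:
            mismatches.append((pid, old, new))
    return mismatches

# Example with in-memory stand-ins for the two backends
monolith = {"sku-1": 3, "sku-2": 5}
service = {"sku-1": 3, "sku-2": 4}
print(parity_report(["sku-1", "sku-2"], monolith.get, service.get))
```

Run it on a schedule during the dual-write window and only flip the read flag after several consecutive clean runs.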
## Avoiding the Distributed Monolith
> After 6 months, our 8 microservices are tightly coupled.
> Deploying any service requires coordinating 4 others.
> What went wrong?
Signs you’ve built a distributed monolith:

- **Synchronous chains:** order-service → inventory-service → warehouse-service → shipping-service. One slow service stalls the whole chain.
- **Shared database tables:** two services reading and writing the same tables. Same coupling as a monolith, now with network overhead.
- **Deployment dance:** “We need to deploy inventory-service before order-service after payment-service.” True microservices deploy independently.
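The cost of a synchronous chain compounds: latencies add, and availabilities multiply. A back-of-envelope calculation makes the point:

```python
# Availability of a synchronous call chain: assuming independent failures,
# the chain succeeds only when every hop succeeds, so per-hop availabilities
# multiply.
def chain_availability(per_service: float, hops: int) -> float:
    """Probability the whole chain succeeds, given per-hop availability."""
    return per_service ** hops

# Four services at 99.9% each → roughly 99.6% for the whole chain
print(round(chain_availability(0.999, 4), 4))
```

Four nines-ish services in a row already lose a third of a percent of availability, and every hop also adds its network latency to the critical path.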
### Fix: event-driven decoupling
```typescript
// ❌ Synchronous coupling — the order service calls each dependency directly
async function createOrderSync(items: Item[]) {
  const order = await db.createOrder(items);
  await inventoryService.reserve(order.items);  // Blocks on network call
  await paymentService.charge(order.total);     // Blocks on network call
  await warehouseService.notify(order.id);      // Blocks on network call
  return order;
}

// ✅ Event-driven — publish and let services react
async function createOrder(items: Item[]) {
  const order = await db.createOrder(items);

  // Publish an event — services subscribe and react asynchronously
  await events.publish('order.created', {
    orderId: order.id,
    items: order.items,
    customerId: order.customerId,
  });

  // Return immediately — downstream services handle their own concerns
  return order;
}
```
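To make the decoupling concrete, here is a minimal in-memory sketch of the publish/subscribe relationship — in production the bus would be Kafka, SNS/SQS, or similar, and the topic and handler names here are illustrative:

```python
# Minimal in-memory event bus: the publisher never references its consumers,
# so new subscribers can be added (or deployed) without touching the publisher.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # The publisher doesn't know or care who reacts
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
reserved = []

# Stand-in for the inventory service's subscription
bus.subscribe("order.created", lambda evt: reserved.append(evt["orderId"]))

# Stand-in for the order service publishing after it persists the order
bus.publish("order.created", {"orderId": "o-1", "items": []})
```

The key property is directional: the order service depends on the bus and the event schema, never on inventory, payments, or warehousing — so any of those can deploy, fail, or be replaced without a coordinated release.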
For the event-driven patterns that replace synchronous microservice calls, see the event-driven architecture guide. For the Kafka-based event streaming that connects decomposed services, the Kafka guide covers the implementation. The Claude Skills 360 bundle includes architecture skill sets for migration planning, Strangler Fig implementation, and domain boundary analysis. Start with the free tier to try domain decomposition prompts.