Edge computing runs your logic closer to users: Cloudflare Workers execute in 300+ locations worldwide, with sub-millisecond cold starts and global distribution. Claude Code can generate solid Workers code once it understands the fetch handler model, Durable Objects for coordination, KV for eventually consistent storage, and the runtime limits that make the edge different from Node.js.
This guide covers Cloudflare Workers with Claude Code: API handlers, Durable Objects for stateful edge logic, KV storage, and R2 object storage.
CLAUDE.md for Cloudflare Workers
## Cloudflare Workers Stack
- Workers runtime (V8 isolates — NOT Node.js)
- Storage: KV for config/sessions, Durable Objects for stateful coordination, R2 for files
- D1 for SQLite at the edge (now generally available)
- Wrangler for local dev and deployment
## Workers Constraints (different from Node.js)
- No filesystem access
- CPU time limit: 10ms per request on the free tier, up to 30s on paid
- No long polling — use WebSockets or SSE for streaming
- KV is eventually consistent (replicated globally; writes can take up to ~60s to propagate)
- Durable Objects are strongly consistent, but each object is a single instance running in one location
## Patterns
- Fetch handler: export default { async fetch(request, env, ctx) } — always return a Response
- ctx.waitUntil() for fire-and-forget work after response sent
- Cache API for caching responses at edge
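The fetch handler and ctx.waitUntil() patterns above can be sketched in a few lines. This is a minimal illustration, not production code; ExecutionContext is narrowed to a local Ctx interface so the sketch stands alone:

```typescript
// Minimal sketch of the Workers fetch-handler shape. Ctx mirrors the one
// method of ExecutionContext used here; in a real Worker you would use the
// ExecutionContext type from @cloudflare/workers-types.
interface Ctx {
  waitUntil(promise: Promise<unknown>): void;
}

const worker = {
  async fetch(request: Request, _env: unknown, ctx: Ctx): Promise<Response> {
    const url = new URL(request.url);

    // Fire-and-forget work: the isolate stays alive until this settles,
    // but the client response is not delayed by it
    ctx.waitUntil(
      Promise.resolve().then(() => {
        // e.g. write analytics, refresh a cache entry, emit a log line
      }),
    );

    // Every code path must return a Response
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};

export default worker;
```

The same shape scales to the full proxy below: route, authenticate, respond, and defer any non-critical work through ctx.waitUntil().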
Basic Worker Handler
Create a Cloudflare Worker that serves as an API proxy with
authentication, rate limiting via KV, and response caching.
// src/worker.ts
export interface Env {
API_KEY_SALT: string;
RATE_LIMIT_KV: KVNamespace;
CACHE_KV: KVNamespace;
UPSTREAM_URL: string;
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url);
// Health check — no auth needed
if (url.pathname === '/health') {
return new Response('OK', { status: 200 });
}
// Authenticate request
const authResult = await authenticate(request, env);
if (!authResult.ok) {
return new Response(JSON.stringify({ error: 'Unauthorized' }), {
status: 401,
headers: { 'Content-Type': 'application/json' },
});
}
// Rate limiting
const rateLimitResult = await checkRateLimit(authResult.userId, env);
if (!rateLimitResult.allowed) {
return new Response(
JSON.stringify({ error: 'Rate limit exceeded', retryAfter: rateLimitResult.retryAfter }),
{
status: 429,
headers: {
'Content-Type': 'application/json',
'Retry-After': String(rateLimitResult.retryAfter),
'X-RateLimit-Limit': '100',
'X-RateLimit-Remaining': '0',
},
},
);
}
// Check cache for GET requests
if (request.method === 'GET') {
const cacheKey = `${authResult.userId}:${url.pathname}${url.search}`;
const cached = await env.CACHE_KV.get(cacheKey, 'json');
if (cached) {
return new Response(JSON.stringify(cached), {
status: 200,
headers: { 'Content-Type': 'application/json', 'X-Cache': 'HIT' },
});
}
}
// Proxy to upstream
const upstreamRequest = new Request(
`${env.UPSTREAM_URL}${url.pathname}${url.search}`,
{
method: request.method,
headers: {
...Object.fromEntries(request.headers),
'X-User-Id': authResult.userId,
// Don't forward the original auth header — replace with service auth
'Authorization': `Bearer ${env.API_KEY_SALT}`,
},
body: request.method !== 'GET' ? request.body : undefined,
},
);
const response = await fetch(upstreamRequest);
// Buffer the body as JSON (assumes the upstream always returns JSON;
// a non-JSON response would throw here)
const responseBody = await response.json();
// Cache successful GET responses
if (request.method === 'GET' && response.ok) {
const cacheKey = `${authResult.userId}:${url.pathname}${url.search}`;
// waitUntil: cache write happens after response is sent
ctx.waitUntil(
env.CACHE_KV.put(cacheKey, JSON.stringify(responseBody), { expirationTtl: 60 }),
);
}
return new Response(JSON.stringify(responseBody), {
status: response.status,
headers: { 'Content-Type': 'application/json', 'X-Cache': 'MISS' },
});
},
};
async function authenticate(request: Request, env: Env): Promise<{ ok: true; userId: string } | { ok: false }> {
const token = request.headers.get('Authorization')?.replace('Bearer ', '');
if (!token) return { ok: false };
// Validate token against KV (tokens stored at creation time)
const userId = await env.RATE_LIMIT_KV.get(`token:${token}`);
if (!userId) return { ok: false };
return { ok: true, userId };
}
async function checkRateLimit(
userId: string,
env: Env,
): Promise<{ allowed: true } | { allowed: false; retryAfter: number }> {
const key = `ratelimit:${userId}:${Math.floor(Date.now() / 60000)}`; // Per-minute window
const current = parseInt(await env.RATE_LIMIT_KV.get(key) ?? '0');
const limit = 100;
if (current >= limit) {
const retryAfter = 60 - (Date.now() % 60000) / 1000;
return { allowed: false, retryAfter: Math.ceil(retryAfter) };
}
// Increment the counter. KV writes are eventually consistent, so this
// limit is approximate under concurrent requests.
await env.RATE_LIMIT_KV.put(key, String(current + 1), { expirationTtl: 120 });
return { allowed: true };
}
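Because checkRateLimit() does a KV read followed by a separate write, two concurrent requests can read the same count and both pass; combined with KV's eventual consistency, the limit is best-effort. For a strict limit, the counter can live in a Durable Object, which processes requests serially. The sketch below shows only the window-counting logic (the class and method names are illustrative, and the Durable Object plumbing is omitted):

```typescript
// Sketch of a fixed-window counter as it would run inside a Durable Object.
// Because a Durable Object handles requests one at a time, increment() is
// effectively atomic, unlike the KV read-then-write in checkRateLimit().
class WindowCounter {
  private counts = new Map<string, number>();

  // Returns the count after incrementing the current window for this user
  increment(userId: string, now: number, windowMs = 60_000): number {
    const key = `${userId}:${Math.floor(now / windowMs)}`;
    const next = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, next);
    return next;
  }

  // True while the user is within the per-window limit
  allowed(userId: string, now: number, limit = 100): boolean {
    return this.increment(userId, now) <= limit;
  }
}
```

In a real deployment the map would be backed by this.state.storage so counts survive object restarts, and the Worker would route each user's requests to one object via idFromName(userId).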
Durable Objects for Stateful Coordination
Build a Durable Object that manages a WebSocket room.
Multiple users connect, messages are broadcast to all.
State persists if the worker restarts.
// src/chat-room.ts
import type { Env } from './worker'; // reuse the Worker's binding types
export class ChatRoom implements DurableObject {
private sessions: Map<string, WebSocket> = new Map();
private history: { user: string; message: string; timestamp: number }[] = [];
constructor(private state: DurableObjectState, private env: Env) {
// Restoring WebSocket sessions isn't possible (sockets are new per process),
// but history CAN be restored from durable storage
this.state.blockConcurrencyWhile(async () => {
this.history = (await this.state.storage.get('history')) ?? [];
});
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
if (request.headers.get('Upgrade') !== 'websocket') {
return new Response('Expected WebSocket', { status: 426 });
}
const userId = url.searchParams.get('userId');
if (!userId) return new Response('Missing userId', { status: 400 });
const [client, server] = Object.values(new WebSocketPair()) as [WebSocket, WebSocket];
// server.accept() pairs with the addEventListener handlers below.
// (state.acceptWebSocket() is the hibernation API, which instead delivers
// events via webSocketMessage()/webSocketClose() methods on the class.)
server.accept();
this.sessions.set(userId, server);
// Send history to new joiner
server.send(JSON.stringify({ type: 'history', messages: this.history.slice(-50) }));
// Broadcast join notification
this.broadcast(userId, { type: 'user_joined', userId }, userId);
server.addEventListener('message', async (evt) => {
const data = JSON.parse(evt.data as string);
if (data.type === 'message') {
const entry = { user: userId, message: data.text, timestamp: Date.now() };
this.history.push(entry);
// Persist history (last 200 messages)
if (this.history.length > 200) this.history = this.history.slice(-200);
await this.state.storage.put('history', this.history);
// Broadcast to all connected clients
this.broadcast(userId, { type: 'message', ...entry });
}
});
server.addEventListener('close', () => {
this.sessions.delete(userId);
this.broadcast(userId, { type: 'user_left', userId }, userId);
});
return new Response(null, { status: 101, webSocket: client });
}
private broadcast(fromUserId: string, message: object, excludeUserId?: string) {
const payload = JSON.stringify(message);
for (const [id, ws] of this.sessions) {
if (id !== excludeUserId) {
try {
ws.send(payload);
} catch {
this.sessions.delete(id);
}
}
}
}
}
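To reach a ChatRoom instance from the Worker's fetch handler, the request is routed through the Durable Object namespace binding. The sketch below assumes the CHAT_ROOM binding configured in wrangler.toml and invents a /rooms/:name URL scheme for illustration; RoomNamespace narrows DurableObjectNamespace to the two methods actually used:

```typescript
// Local interface mirroring the subset of DurableObjectNamespace used here
interface RoomNamespace {
  idFromName(name: string): { toString(): string };
  get(id: { toString(): string }): { fetch(req: Request): Promise<Response> };
}

// Hypothetical routing helper: /rooms/:name forwards to the ChatRoom object.
// idFromName() maps the same room name to the same object instance globally,
// so every client of "lobby" lands on one ChatRoom regardless of which edge
// location they connect through.
async function routeToRoom(
  request: Request,
  chatRoom: RoomNamespace,
): Promise<Response> {
  const url = new URL(request.url);
  const match = url.pathname.match(/^\/rooms\/([\w-]+)$/);
  if (!match) return new Response('Not found', { status: 404 });

  const id = chatRoom.idFromName(match[1]);
  return chatRoom.get(id).fetch(request);
}
```

In the Worker itself this would be called as routeToRoom(request, env.CHAT_ROOM), with the WebSocket upgrade passing through unchanged.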
# wrangler.toml
name = "my-app"
main = "src/worker.ts"
compatibility_date = "2024-01-01"
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"
[[migrations]]
tag = "v1"
new_classes = ["ChatRoom"]
[[kv_namespaces]]
binding = "RATE_LIMIT_KV"
id = "..."
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-uploads"
R2 for Object Storage
// Upload to R2 and return a public URL
// (Env must also declare UPLOADS: R2Bucket to match the wrangler.toml binding)
async function handleUpload(request: Request, env: Env): Promise<Response> {
const formData = await request.formData();
const file = formData.get('file') as File;
if (!file) return new Response('No file', { status: 400 });
const key = `uploads/${crypto.randomUUID()}/${file.name}`;
// Upload to R2
await env.UPLOADS.put(key, file.stream(), {
httpMetadata: {
contentType: file.type,
cacheControl: 'public, max-age=31536000, immutable',
},
customMetadata: {
uploadedBy: 'user-id',
originalName: file.name,
},
});
// Return public URL (requires R2 public access or custom domain)
return new Response(
JSON.stringify({ key, url: `https://cdn.example.com/${key}` }),
{ headers: { 'Content-Type': 'application/json' } },
);
}
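The upload handler above returns a CDN URL, but a Worker can also serve objects straight out of R2. Here is a sketch of the read path, with local interfaces standing in for the R2Bucket types (handleDownload and the interface names are illustrative; writeHttpMetadata() and httpEtag are part of the R2 API):

```typescript
// Minimal local types mirroring the parts of the R2 API used here
interface R2ObjectBodyLike {
  body: ReadableStream | string;
  httpEtag: string;
  writeHttpMetadata(headers: Headers): void;
}
interface R2BucketLike {
  get(key: string): Promise<R2ObjectBodyLike | null>;
}

// Hypothetical download handler: streams an R2 object back to the client
async function handleDownload(
  request: Request,
  bucket: R2BucketLike,
): Promise<Response> {
  const key = new URL(request.url).pathname.slice(1); // strip leading '/'
  const object = await bucket.get(key); // null when the key doesn't exist
  if (!object) return new Response('Not found', { status: 404 });

  const headers = new Headers();
  // Copies the contentType / cacheControl that were set at upload time
  object.writeHttpMetadata(headers);
  headers.set('ETag', object.httpEtag);
  return new Response(object.body, { headers });
}
```

In a Worker this would be wired up as handleDownload(request, env.UPLOADS), letting R2 objects stream through without buffering in memory.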
For a comparison of Cloudflare Workers with AWS Lambda and Vercel Edge Functions, see the serverless guide. This site (claudeskills360.com) is itself a Cloudflare Pages + Workers deployment, so the Claude Code skills behind it run on Cloudflare. The Claude Skills 360 bundle includes edge computing skill sets for Workers, Durable Objects, and KV patterns. Start with the free tier to try edge function code generation.