
Claude Code for Cloudflare R2: S3-Compatible Object Storage at the Edge

Published: February 8, 2027
Read time: 8 min
By: Claude Skills 360

Cloudflare R2 provides S3-compatible object storage with zero egress fees: you pay for storage and operations, not for data served from R2 to the internet. R2 buckets bind directly to Workers as native bindings, so env.MY_BUCKET.get(key) and env.MY_BUCKET.put(key, body) work without HTTP overhead. The S3-compatible API works with any @aws-sdk/client-s3 or aws4fetch client pointed at R2's endpoint with R2 API tokens. Pre-signed URLs let browsers upload directly to R2 without routing bytes through your Worker. Multipart uploads handle files too large for a single PUT. Event notifications delivered through Cloudflare Queues trigger Workers on object create or delete. Claude Code generates R2 Worker handlers, multipart upload orchestration, pre-signed URL generation, S3 client configuration, and the wrangler.toml binding configuration for production R2 storage applications.

CLAUDE.md for Cloudflare R2

## Cloudflare R2 Stack
- Binding: env.BUCKET (R2Bucket) in Workers — no HTTP, direct native binding
- S3 compat: use aws4fetch with endpoint https://<ACCOUNT_ID>.r2.cloudflarestorage.com
- Pre-signed: AwsV4Signer from aws4fetch for browser direct uploads
- Multipart: bucket.createMultipartUpload → uploadPart → completeMultipartUpload
- Notifications: queue binding fires on r2ObjectCreated / r2ObjectDeleted events
- Public: set public access on bucket for CDN-served static assets
- Metadata: store file metadata in KV or D1 alongside R2 object keys

wrangler.toml Configuration

# wrangler.toml
name = "file-service"
main = "src/worker.ts"
compatibility_date = "2024-09-23"

[[r2_buckets]]
binding = "UPLOADS_BUCKET"
bucket_name = "my-uploads"
preview_bucket_name = "my-uploads-dev"

[[r2_buckets]]
binding = "ASSETS_BUCKET"
bucket_name = "my-assets"
preview_bucket_name = "my-assets-dev"

[[kv_namespaces]]
binding = "FILE_METADATA"
id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

[[queues.consumers]]
queue = "r2-events"
max_batch_size = 10
max_batch_timeout = 5

# Producer binding used by the queue handler to enqueue image-processing
# jobs (queue name is an example)
[[queues.producers]]
binding = "IMAGE_PROCESSING_QUEUE"
queue = "image-processing"
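The [[queues.consumers]] block only wires the Worker to the queue; the notification rule that makes R2 publish events is created once with the wrangler CLI rather than declared in wrangler.toml. A sketch of the commands (verify the flags against your installed wrangler version):

```shell
# Route object-create and object-delete events from my-uploads to the
# r2-events queue (one rule per event type)
npx wrangler r2 bucket notification create my-uploads \
  --event-type object-create \
  --queue r2-events

npx wrangler r2 bucket notification create my-uploads \
  --event-type object-delete \
  --queue r2-events
```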

Worker with R2 Binding

// src/worker.ts — R2 file operations in Cloudflare Worker
interface Env {
  UPLOADS_BUCKET: R2Bucket
  ASSETS_BUCKET: R2Bucket
  FILE_METADATA: KVNamespace
  IMAGE_PROCESSING_QUEUE: Queue
  R2_ACCOUNT_ID: string
  R2_ACCESS_KEY_ID: string
  R2_SECRET_ACCESS_KEY: string
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url)
    const { method } = request

    // GET /files/:key — serve file
    if (url.pathname.startsWith("/files/") && method === "GET") {
      const key = url.pathname.slice(7)
      return serveFile(env, key)
    }

    // POST /upload — handle multipart form upload
    if (url.pathname === "/upload" && method === "POST") {
      return handleUpload(request, env)
    }

    // POST /upload/initiate — create pre-signed URL for browser upload
    if (url.pathname === "/upload/initiate" && method === "POST") {
      return initiateDirectUpload(request, env)
    }

    // DELETE /files/:key
    if (url.pathname.startsWith("/files/") && method === "DELETE") {
      const key = url.pathname.slice(7)
      return deleteFile(env, key)
    }

    return new Response("Not found", { status: 404 })
  },
}

async function serveFile(env: Env, key: string): Promise<Response> {
  const object = await env.UPLOADS_BUCKET.get(key)

  if (!object) {
    return new Response("Not found", { status: 404 })
  }

  const headers = new Headers()
  object.writeHttpMetadata(headers)
  headers.set("etag", object.httpEtag)
  headers.set("Cache-Control", "public, max-age=31536000")

  return new Response(object.body, { headers })
}
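serveFile returns the whole object, but R2's get also accepts a range option, which lets the Worker honor HTTP Range requests for video seeking and resumable downloads. A sketch under assumptions: parseRange is a hypothetical helper handling only single bytes=start-end ranges, and the bucket is typed structurally so the snippet stands alone.

```typescript
// Hypothetical helper: parse a single "bytes=start-end" Range header
// into the { offset, length } shape R2's get() accepts.
// Returns null for anything it cannot handle.
export function parseRange(
  header: string | null,
  size: number
): { offset: number; length: number } | null {
  const m = header?.match(/^bytes=(\d+)-(\d*)$/)
  if (!m) return null
  const offset = Number(m[1])
  const end = m[2] === "" ? size - 1 : Math.min(Number(m[2]), size - 1)
  if (offset >= size || offset > end) return null
  return { offset, length: end - offset + 1 }
}

// Minimal structural slice of the R2 binding used here
interface RangedBucket {
  head(key: string): Promise<{ size: number } | null>
  get(
    key: string,
    opts?: { range: { offset: number; length: number } }
  ): Promise<{ body: ReadableStream | null; writeHttpMetadata(h: Headers): void } | null>
}

// Sketch: head() fetches the size, then get() with a range option
// returns only the requested bytes as a 206 Partial Content
export async function serveFileRange(
  bucket: RangedBucket,
  key: string,
  request: Request
): Promise<Response> {
  const head = await bucket.head(key)
  if (!head) return new Response("Not found", { status: 404 })

  const range = parseRange(request.headers.get("Range"), head.size)
  const object = await bucket.get(key, range ? { range } : undefined)
  if (!object) return new Response("Not found", { status: 404 })

  const headers = new Headers({ "Accept-Ranges": "bytes" })
  object.writeHttpMetadata(headers)

  if (range) {
    headers.set(
      "Content-Range",
      `bytes ${range.offset}-${range.offset + range.length - 1}/${head.size}`
    )
    return new Response(object.body, { status: 206, headers })
  }
  return new Response(object.body, { headers })
}
```

A full implementation would also handle suffix ranges (bytes=-500) and conditional If-Range requests.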

async function handleUpload(request: Request, env: Env): Promise<Response> {
  const formData = await request.formData()
  const file = formData.get("file") as File

  if (!file) return new Response("No file", { status: 400 })

  const key = `uploads/${crypto.randomUUID()}/${file.name}`

  // For larger files use multipart; for smaller files, a direct put.
  // Note: request.formData() buffers the body in memory, and Workers
  // enforce request-size and memory limits, so very large uploads are
  // better handled with pre-signed direct uploads (next section)
  if (file.size > 100 * 1024 * 1024) {
    await multipartUpload(env.UPLOADS_BUCKET, key, file)
  } else {
    await env.UPLOADS_BUCKET.put(key, file.stream(), {
      httpMetadata: {
        contentType: file.type,
        contentDisposition: `attachment; filename="${file.name}"`,
      },
      customMetadata: {
        originalName: file.name,
        uploadedAt: new Date().toISOString(),
      },
    })
  }

  // Store metadata in KV
  await env.FILE_METADATA.put(key, JSON.stringify({
    key,
    name: file.name,
    size: file.size,
    contentType: file.type,
    uploadedAt: new Date().toISOString(),
  }), { expirationTtl: 60 * 60 * 24 * 365 })  // 1 year

  return Response.json({ key, url: `/files/${key}` }, { status: 201 })
}
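The deleteFile handler referenced in the router is not shown above. A minimal sketch that removes the object and its KV metadata together; the bindings are typed structurally here so the snippet stands alone, and the real Env bindings satisfy the same shape:

```typescript
// Minimal structural slice of the Env bindings used here
type DeleteEnv = {
  UPLOADS_BUCKET: { delete(key: string): Promise<void> }
  FILE_METADATA: { delete(key: string): Promise<void> }
}

// Sketch: delete the R2 object and its KV metadata entry together so
// the two stores do not drift apart
export async function deleteFile(env: DeleteEnv, key: string): Promise<Response> {
  // R2 delete is idempotent; deleting a missing key is not an error
  await env.UPLOADS_BUCKET.delete(key)
  await env.FILE_METADATA.delete(key)
  return new Response(null, { status: 204 })
}
```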

Multipart Upload

// src/multipart.ts — R2 multipart for large files
export async function multipartUpload(
  bucket: R2Bucket,
  key: string,
  file: File
): Promise<void> {
  // R2 requires every part except the last to be the same size (5MiB minimum)
  const PART_SIZE = 10 * 1024 * 1024  // 10MB parts

  // Initiate multipart upload
  const upload = await bucket.createMultipartUpload(key, {
    httpMetadata: { contentType: file.type },
    customMetadata: { originalName: file.name },
  })

  const parts: R2UploadedPart[] = []
  const partCount = Math.ceil(file.size / PART_SIZE)

  const uploadPart = async (partNumber: number): Promise<R2UploadedPart> => {
    const start = (partNumber - 1) * PART_SIZE
    const end = Math.min(start + PART_SIZE, file.size)
    // Blob.slice reads only this part's bytes; buffering the whole
    // file with arrayBuffer() would exceed Worker memory limits
    const partData = await file.slice(start, end).arrayBuffer()
    return upload.uploadPart(partNumber, partData)
  }

  try {
    // Upload parts in batches of 5 concurrent requests
    for (let i = 0; i < partCount; i += 5) {
      const batch = Array.from(
        { length: Math.min(5, partCount - i) },
        (_, j) => uploadPart(i + j + 1)
      )
      parts.push(...(await Promise.all(batch)))
    }

    await upload.complete(parts)
  } catch (err) {
    // Abort so R2 does not retain storage for the orphaned parts
    await upload.abort()
    throw err
  }
}
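The helper above runs the whole multipart upload inside one request. When the client drives the upload across many requests, for example one Worker invocation per part, the upload can be resumed by id with bucket.resumeMultipartUpload, which reconstructs a handle without a network call. A hedged sketch of the per-part handler; the query-parameter shape is an assumption, and the bucket is typed structurally so the snippet stands alone:

```typescript
// Structural slice of the R2 multipart API used in this sketch
interface MultipartBucket {
  resumeMultipartUpload(
    key: string,
    uploadId: string
  ): {
    uploadPart(
      partNumber: number,
      value: ReadableStream | ArrayBuffer | string
    ): Promise<{ partNumber: number; etag: string }>
  }
}

// Sketch: per-part handler for client-driven multipart uploads. The
// client first obtains { key, uploadId } from an initiation endpoint
// that calls createMultipartUpload, then PUTs each part here.
export async function handleUploadPart(
  bucket: MultipartBucket,
  request: Request
): Promise<Response> {
  const url = new URL(request.url)
  const key = url.searchParams.get("key")
  const uploadId = url.searchParams.get("uploadId")
  const partNumber = Number(url.searchParams.get("partNumber"))

  if (!key || !uploadId || !partNumber || !request.body) {
    return Response.json({ error: "Missing parameters" }, { status: 400 })
  }

  // resumeMultipartUpload rebuilds the handle from key + uploadId
  // without a network call
  const upload = bucket.resumeMultipartUpload(key, uploadId)
  const part = await upload.uploadPart(partNumber, request.body)

  // The client collects { partNumber, etag } pairs and posts them to a
  // completion endpoint that calls upload.complete(parts)
  return Response.json(part)
}
```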

Pre-Signed URLs for Browser Uploads

// src/presigned.ts — generate pre-signed URLs with aws4fetch
import { AwsV4Signer } from "aws4fetch"

interface PresignedUrlOptions {
  accountId: string
  accessKeyId: string
  secretAccessKey: string
  bucket: string
  key: string
  contentType: string
  expiresIn?: number  // seconds
}

export async function generatePresignedUploadUrl(
  opts: PresignedUrlOptions
): Promise<{ uploadUrl: string; key: string }> {
  const {
    accountId, accessKeyId, secretAccessKey, bucket, key, contentType,
    expiresIn = 3600,
  } = opts

  const endpoint = `https://${accountId}.r2.cloudflarestorage.com`
  const url = new URL(`${endpoint}/${bucket}/${key}`)

  // The expiry must be on the URL before signing: the signature covers
  // the query string, so appending X-Amz-Expires afterwards would
  // invalidate it
  url.searchParams.set("X-Amz-Expires", String(expiresIn))

  const signer = new AwsV4Signer({
    url: url.toString(),
    method: "PUT",
    region: "auto",
    service: "s3",
    accessKeyId,
    secretAccessKey,
    headers: {
      "content-type": contentType,
    },
    signQuery: true,  // Pre-signed URL mode
  })

  const { url: signedUrl } = await signer.sign()

  return { uploadUrl: signedUrl.toString(), key }
}
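On the browser side the client PUTs the file straight to the signed URL; the Content-Type header has to match the value that was signed or R2 rejects the signature. A small hypothetical helper:

```typescript
// Sketch: browser-side direct upload to a pre-signed R2 URL
export function buildUploadInit(file: Blob): RequestInit {
  return {
    method: "PUT",
    // Must exactly match the content-type that was signed server-side
    headers: { "Content-Type": file.type || "application/octet-stream" },
    body: file,
  }
}

export async function uploadDirect(uploadUrl: string, file: Blob): Promise<void> {
  const res = await fetch(uploadUrl, buildUploadInit(file))
  if (!res.ok) throw new Error(`Direct upload failed: ${res.status}`)
}
```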

// API endpoint for generating upload URL
async function initiateDirectUpload(
  request: Request,
  env: Env
): Promise<Response> {
  const { fileName, contentType, size } = await request.json() as {
    fileName: string
    contentType: string
    size: number
  }

  if (size > 500 * 1024 * 1024) {  // 500MB limit
    return Response.json({ error: "File too large" }, { status: 413 })
  }

  const key = `uploads/${crypto.randomUUID()}/${fileName}`

  const { uploadUrl } = await generatePresignedUploadUrl({
    accountId: env.R2_ACCOUNT_ID,
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
    bucket: "my-uploads",
    key,
    contentType,
    expiresIn: 3600,
  })

  return Response.json({ uploadUrl, key })
}
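The intro mentions @aws-sdk/client-s3; R2 speaks the same S3 API once the client is pointed at the account endpoint with region "auto". A configuration sketch (the credentials are R2 API tokens created in the Cloudflare dashboard, not Cloudflare account credentials):

```typescript
// src/s3-client.ts — AWS SDK v3 client configured for R2
import { S3Client } from "@aws-sdk/client-s3"

export const s3 = new S3Client({
  region: "auto",  // R2 ignores regions; "auto" is the documented value
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
})
```

From here the client behaves like any S3 client, e.g. s3.send(new ListObjectsV2Command({ Bucket: "my-uploads" })).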

Queue Event Handler

// src/queue-handler.ts — process R2 event notifications
// Shape of the R2 event notification payload (abbreviated)
interface R2Event {
  action: "PutObject" | "CompleteMultipartUpload" | "DeleteObject" | "CopyObject"
  bucket: string
  eventTime: string
  object: { key: string; size: number; eTag: string }
}

export default {
  async queue(batch: MessageBatch<R2Event>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      const event = message.body

      // Multipart uploads arrive as CompleteMultipartUpload, not PutObject
      if (event.action === "PutObject" || event.action === "CompleteMultipartUpload") {
        await handleNewUpload(event, env)
      } else if (event.action === "DeleteObject") {
        await handleDelete(event, env)
      }

      message.ack()
    }
  },
}

async function handleNewUpload(event: R2Event, env: Env) {
  const { key, size } = event.object

  // Trigger image processing for image uploads.
  // IMAGE_PROCESSING_QUEUE is a Queue producer binding; declare it in
  // the Env interface and in wrangler.toml
  if (key.match(/\.(jpg|jpeg|png|webp)$/i)) {
    await env.IMAGE_PROCESSING_QUEUE.send({
      key,
      size,
      variants: ["thumbnail", "medium", "large"],
    })
  }

  console.log(`New upload processed: ${key} (${size} bytes)`)
}
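handleDelete, referenced in the queue consumer but not shown, typically just removes the KV metadata written at upload time. A minimal sketch with structurally typed bindings so the snippet stands alone:

```typescript
// Structural slices of the event and env shapes used here
type DeleteEvent = { object: { key: string } }
type MetadataEnv = { FILE_METADATA: { delete(key: string): Promise<void> } }

// Sketch: on DeleteObject events, drop the KV metadata entry so it
// does not outlive the object
export async function handleDelete(event: DeleteEvent, env: MetadataEnv): Promise<void> {
  await env.FILE_METADATA.delete(event.object.key)
  console.log(`Metadata removed for deleted object: ${event.object.key}`)
}
```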

If you are on AWS infrastructure and want the more mature S3 ecosystem (Lambda triggers, S3 Select, Intelligent-Tiering storage classes), see the S3 guide for pre-signed URL and lifecycle patterns. For Supabase Storage, which layers object storage with Postgres RLS access-control policies on top of a PostgreSQL backend, the Supabase guide covers storage bucket policies. The Claude Skills 360 bundle includes Cloudflare R2 skill sets covering Worker bindings, multipart uploads, and pre-signed URLs. Start with the free tier to try R2 storage configuration generation.
