Backend

Claude Code for Sharp: High-Performance Node.js Image Processing

Published: April 25, 2027
Read time: 6 min read
By: Claude Skills 360

Sharp is the fastest Node.js image processing library, wrapping libvips: sharp(input).resize(800, 600).webp({ quality: 80 }).toBuffer() chains transforms into a single pipeline. resize(width, height, { fit: "cover" }) crops to exact dimensions, fit: "inside" preserves aspect ratio, and fit: "contain" adds letterboxing. .toFormat("avif", { quality: 65 }) converts to AVIF for maximum compression. .metadata() returns width, height, format, and EXIF data without decoding the full image. composite([{ input: logoBuffer, gravity: "southeast" }]) overlays watermarks. .blur(3) and .sharpen() apply filters, and .rotate() auto-orients from EXIF metadata. Piping a readable stream through a Sharp instance into a writable stream creates streaming pipelines that never buffer the whole file. Sharp generates responsive image sets, thumbnails, and WebP/AVIF variants at upload time for fast CDN delivery. Claude Code generates Sharp pipelines, multi-format export utilities, upload processors, and Next.js image optimization API routes.
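The three fit modes differ only in how they map the source onto the target box. This sketch uses a hypothetical helper (not part of sharp) to show the output dimensions each mode produces:

```typescript
// Hypothetical helper illustrating sharp's fit modes: "cover" crops and
// "contain" letterboxes (both fill the exact box), while "inside" scales to
// fit within the box without enlarging.
type Fit = "cover" | "contain" | "inside"

function fitDimensions(srcW: number, srcH: number, dstW: number, dstH: number, fit: Fit) {
  if (fit === "cover" || fit === "contain") return { width: dstW, height: dstH }
  const scale = Math.min(dstW / srcW, dstH / srcH, 1)
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) }
}

console.log(fitDimensions(4000, 3000, 800, 600, "inside"))  // { width: 800, height: 600 }
console.log(fitDimensions(4000, 2000, 800, 600, "inside"))  // { width: 800, height: 400 }
```

Note that "contain" pads the letterboxed area with a background color, while "inside" simply returns a smaller image.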

CLAUDE.md for Sharp

## Sharp Stack
- Version: sharp >= 0.33
- Resize: sharp(buf).resize(800, 600, { fit: "cover", position: "centre" })
- Format: .webp({ quality: 80 }) | .avif({ quality: 65 }) | .jpeg({ quality: 85, mozjpeg: true })
- Output: .toBuffer() | .toFile("output.webp") | .pipe(writeStream)
- Metadata: const { width, height, format } = await sharp(buf).metadata()
- Composite: .composite([{ input: watermarkBuf, gravity: "southeast", blend: "over" }])
- Limit: sharp(buf, { limitInputPixels: false }) — constructor option to disable the 268MP pixel limit for large images
- Stream: sharp pipeline is streamable — sharp(readStream).resize(800).pipe(writeStream)
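The per-format encoder options from the Format line above can be centralized in one helper. A sketch; the AVIF quality offset (15 points below WebP/JPEG, since AVIF's quality scale is not directly comparable) is an assumption used throughout this article, not a sharp default:

```typescript
type ImageFormat = "webp" | "avif" | "jpeg" | "png"

// Map a target format to sharp encoder options (quality offsets are assumptions)
function encoderOptions(format: ImageFormat, quality = 80): Record<string, number | boolean> {
  switch (format) {
    case "webp": return { quality, effort: 4 }
    case "avif": return { quality: Math.max(1, quality - 15), effort: 4 }
    case "jpeg": return { quality, mozjpeg: true }
    case "png":  return { compressionLevel: 9 }
  }
}

// Usage with sharp: await sharp(buf).toFormat(format, encoderOptions(format)).toBuffer()
console.log(encoderOptions("avif"))  // { quality: 65, effort: 4 }
```

Centralizing these options keeps the quality policy in one place instead of repeating the same switch in every pipeline.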

Image Processing Utilities

// lib/images/processing.ts — Sharp image pipeline utilities
import sharp from "sharp"

type ImageFormat = "webp" | "avif" | "jpeg" | "png"

type ResizedVariant = {
  buffer: Buffer
  width: number
  height: number
  format: ImageFormat
  sizeBytes: number
}

// Generate responsive image set for srcset
export async function generateResponsiveImages(
  inputBuffer: Buffer,
  options: {
    widths?: number[]
    formats?: ImageFormat[]
    quality?: number
  } = {},
): Promise<Map<string, ResizedVariant>> {
  const {
    widths = [320, 640, 960, 1280, 1920],
    formats = ["webp", "avif"],
    quality = 80,
  } = options

  const meta = await sharp(inputBuffer).metadata()
  const originalWidth = meta.width ?? 1920

  const results = new Map<string, ResizedVariant>()

  for (const format of formats) {
    for (const width of widths) {
      // Don't upscale — skip if wider than original
      if (width > originalWidth) continue

      const key = `${width}w.${format}`
      let pipeline = sharp(inputBuffer)
        .resize(width, null, {
          fit: "inside",
          withoutEnlargement: true,
          withoutReduction: false,
        })

      switch (format) {
        case "webp":  pipeline = pipeline.webp({ quality, effort: 4 }); break
        case "avif":  pipeline = pipeline.avif({ quality: quality - 15, effort: 4 }); break
        case "jpeg":  pipeline = pipeline.jpeg({ quality, mozjpeg: true }); break
        case "png":   pipeline = pipeline.png({ compressionLevel: 9 }); break
      }

      // resolveWithObject returns the output info alongside the bytes,
      // avoiding a second decode just to read dimensions
      const { data, info } = await pipeline.toBuffer({ resolveWithObject: true })

      results.set(key, {
        buffer: data,
        width: info.width,
        height: info.height,
        format,
        sizeBytes: info.size,
      })
    }
  }

  return results
}

// Thumbnail for list views and previews
export async function createThumbnail(
  input: Buffer | string,
  size = 200,
): Promise<Buffer> {
  return sharp(input)
    .resize(size, size, {
      fit: "cover",
      position: "attention",  // Smart crop — finds faces/salient regions
    })
    .webp({ quality: 75 })
    .toBuffer()
}

// Watermark image
export async function addWatermark(
  imageBuffer: Buffer,
  watermarkBuffer: Buffer,
  options: {
    opacity?: number
    gravity?: string
    margin?: number
  } = {},
): Promise<Buffer> {
  const { opacity = 0.6, gravity = "southeast", margin = 20 } = options

  // Resize watermark to 15% of image width
  const meta = await sharp(imageBuffer).metadata()
  const wmarkWidth = Math.round((meta.width ?? 800) * 0.15)

  const processedWatermark = await sharp(watermarkBuffer)
    .resize(wmarkWidth, null, { fit: "inside" })
    .ensureAlpha()
    // Transparent border implements the margin for gravity-based placement
    .extend({
      top: margin, bottom: margin, left: margin, right: margin,
      background: { r: 0, g: 0, b: 0, alpha: 0 },
    })
    // Apply opacity: "dest-in" multiplies the watermark's alpha by the tiled
    // pixel's alpha, so the alpha byte must be 255 * opacity
    .composite([{
      input: Buffer.from([0, 0, 0, Math.round(255 * opacity)]),
      raw: { width: 1, height: 1, channels: 4 },
      tile: true,
      blend: "dest-in",
    }])
    .png()  // keep the alpha channel in the intermediate buffer
    .toBuffer()

  return sharp(imageBuffer)
    .composite([{
      input: processedWatermark,
      gravity: gravity as sharp.Gravity,
      blend: "over",
    }])
    .toBuffer()
}

// Extract metadata safely
export async function getImageMetadata(input: Buffer | string) {
  const meta = await sharp(input).metadata()
  return {
    width: meta.width,
    height: meta.height,
    format: meta.format,
    sizeBytes: meta.size,
    hasAlpha: meta.hasAlpha,
    orientation: meta.orientation,
    aspectRatio: meta.width && meta.height
      ? +(meta.width / meta.height).toFixed(4)
      : null,
  }
}

// Convert and optimize a single image
export async function optimizeImage(
  input: Buffer,
  targetFormat: ImageFormat = "webp",
  maxWidth = 2000,
  quality = 85,
): Promise<{ buffer: Buffer; originalSizeBytes: number; optimizedSizeBytes: number }> {
  const originalSizeBytes = input.byteLength

  let pipeline = sharp(input)
    .rotate()  // Auto-orient from EXIF before resizing
    .resize(maxWidth, null, { fit: "inside", withoutEnlargement: true })

  switch (targetFormat) {
    case "webp": pipeline = pipeline.webp({ quality }); break
    case "avif": pipeline = pipeline.avif({ quality: quality - 15 }); break
    case "jpeg": pipeline = pipeline.jpeg({ quality, mozjpeg: true }); break
    case "png":  pipeline = pipeline.png({ compressionLevel: 9 }); break
  }

  const buffer = await pipeline.toBuffer()
  return { buffer, originalSizeBytes, optimizedSizeBytes: buffer.byteLength }
}

Upload Processing Pipeline

// app/api/upload/route.ts — process and store uploaded images
import { NextRequest, NextResponse } from "next/server"
import { generateResponsiveImages, createThumbnail, getImageMetadata } from "@/lib/images/processing"
import { auth } from "@clerk/nextjs/server"
import { put } from "@vercel/blob"  // or your storage

export async function POST(request: NextRequest) {
  const { userId } = await auth()
  if (!userId) return NextResponse.json({ error: "Unauthorized" }, { status: 401 })

  const formData = await request.formData()
  const file = formData.get("file") as File | null

  if (!file) return NextResponse.json({ error: "No file" }, { status: 400 })

  // Validate file type
  if (!file.type.startsWith("image/")) {
    return NextResponse.json({ error: "File must be an image" }, { status: 400 })
  }

  // Max 10MB
  if (file.size > 10 * 1024 * 1024) {
    return NextResponse.json({ error: "Image must be under 10MB" }, { status: 400 })
  }

  const buffer = Buffer.from(await file.arrayBuffer())

  try {
    // Get metadata
    const meta = await getImageMetadata(buffer)

    // Generate variants
    const [thumbnail, variants] = await Promise.all([
      createThumbnail(buffer, 200),
      generateResponsiveImages(buffer, { widths: [640, 1280, 1920], formats: ["webp"] }),
    ])

    // Upload all variants in parallel
    // Namespace each upload so repeated uploads don't overwrite earlier ones
    const imageId = crypto.randomUUID()
    const uploads = await Promise.all([
      put(`images/${userId}/${imageId}/thumbnail.webp`, thumbnail, { access: "public" }),
      ...Array.from(variants.entries()).map(([key, variant]) =>
        put(`images/${userId}/${imageId}/${key}`, variant.buffer, { access: "public" }),
      ),
    ])

    const [thumbnailUrl, ...variantUrls] = uploads.map(u => u.url)

    return NextResponse.json({
      thumbnailUrl,
      variants: Array.from(variants.entries()).map(([key, variant], i) => {
        const { buffer: _omitted, ...rest } = variant  // strip raw bytes from the response
        return { key, url: variantUrls[i], ...rest }
      }),
      metadata: meta,
    })
  } catch (err) {
    console.error("[Upload]", err)
    return NextResponse.json({ error: "Image processing failed" }, { status: 500 })
  }
}
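A client-side caller for the route above can mirror the server's checks so users get feedback before the network round trip. A sketch; the /api/upload path matches the route in this article, and the response shape is whatever the route returns:

```typescript
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024  // matches the server-side 10MB limit

// Mirror the server's validation (MIME type and size of the picked file)
function validateUpload(mimeType: string, sizeBytes: number): string | null {
  if (!mimeType.startsWith("image/")) return "File must be an image"
  if (sizeBytes > MAX_UPLOAD_BYTES) return "Image must be under 10MB"
  return null
}

async function uploadImage(file: Blob): Promise<unknown> {
  const error = validateUpload(file.type, file.size)
  if (error) throw new Error(error)
  const formData = new FormData()
  formData.append("file", file, "upload")
  const res = await fetch("/api/upload", { method: "POST", body: formData })
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`)
  return res.json()
}

console.log(validateUpload("image/png", 1024))   // null
console.log(validateUpload("text/plain", 1024))  // File must be an image
```

Rejecting oversized files in the browser saves the user from uploading 10MB only to receive a 400.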

When a managed image CDN fits better, Cloudinary's URL-based transformations (/w_800,f_webp/) eliminate server-side processing entirely: responsive images are delivered on the fly without storing multiple variants, though this means depending on a third-party service with per-transformation pricing. See the Cloudinary guide. When hosting on Vercel, the built-in next/image component handles resizing, format conversion, and CDN caching automatically: <Image> with sizes generates responsive images without any Sharp pipeline code. See the next/image guide. The Claude Skills 360 bundle includes Sharp skill sets covering resizing, format conversion, and responsive image generation. Start with the free tier to try image processing generation.
