
Claude Code for Diffusers: Stable Diffusion and Image Generation

Published: September 30, 2027
Read time: 5 min read
By: Claude Skills 360

Diffusers runs state-of-the-art image generation models. Install with pip install diffusers transformers accelerate. The core recipes:

- Text-to-image: pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda"); image = pipe("a photo of an astronaut on mars").images[0]
- SDXL: pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
- Negative prompts and sampling controls: pipe(prompt, negative_prompt="blurry, low quality", num_inference_steps=30, guidance_scale=7.5, height=1024, width=1024)
- Img2Img: StableDiffusionImg2ImgPipeline; pipe(prompt=prompt, image=init_image, strength=0.75)
- Inpainting: StableDiffusionInpaintPipeline with image and mask_image arguments
- ControlNet: controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny"); pipe = StableDiffusionControlNetPipeline.from_pretrained(..., controlnet=controlnet); pipe(prompt, image=canny_image, controlnet_conditioning_scale=0.5)
- Scheduler swap: pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
- FLUX: pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
- Memory: pipe.enable_attention_slicing(); pipe.enable_sequential_cpu_offload() (splits work between CPU and GPU)
- LoRA: pipe.load_lora_weights("path/to/lora.safetensors", adapter_name="style"); pipe.set_adapters(["style"], adapter_weights=[0.8])
- Compile: pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

Claude Code generates Diffusers inference pipelines, ControlNet workflows, LoRA loading, memory optimization, and custom diffusion loops.
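The text-to-image one-liner above, written out as a runnable helper. A minimal sketch: it assumes a CUDA GPU, fp16 weights, and the SD 1.5 model ID from the text; the import is done lazily so the module loads without diffusers installed.

```python
def quickstart(prompt: str = "a photo of an astronaut on mars"):
    """Minimal SD 1.5 text-to-image: load pipeline, generate one image."""
    import torch
    from diffusers import StableDiffusionPipeline

    # fp16 weights halve VRAM use; requires a CUDA device
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # negative_prompt steers generation away from the listed qualities
    return pipe(prompt, negative_prompt="blurry, low quality").images[0]
```

The same shape works for any text-to-image pipeline class; only the model ID and pipeline class change.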

CLAUDE.md for Diffusers

## Diffusers Stack
- Version: diffusers >= 0.30, transformers >= 4.40, accelerate >= 0.30
- SD 1.5: StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
- SDXL: StableDiffusionXLPipeline — 1024x1024 default, supports refiner pipeline
- FLUX: FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev") — best quality
- ControlNet: ControlNetModel + StableDiffusionControlNetPipeline or SD3ControlNetPipeline
- Memory: enable_attention_slicing() → enable_sequential_cpu_offload() → enable_xformers_memory_efficient_attention()
- LoRA: pipe.load_lora_weights(path) → pipe.set_adapters(names, weights)
- Scheduler: pipe.scheduler = EulerAncestralDiscreteScheduler or DPMSolverMultistepScheduler, via .from_config(pipe.scheduler.config)
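The scheduler swap and the torch.compile step from the overview can be combined into one helper. A sketch assuming PyTorch 2.x and an already-loaded pipeline; the scheduler names are from the diffusers API, while the helper name and the "dpm" default are illustrative.

```python
def tune_pipeline(pipe, scheduler_name: str = "dpm"):
    """Swap in a faster scheduler, then compile the UNet for repeated calls."""
    import torch
    from diffusers import (
        DPMSolverMultistepScheduler,
        EulerAncestralDiscreteScheduler,
    )

    cls = (
        DPMSolverMultistepScheduler
        if scheduler_name == "dpm"
        else EulerAncestralDiscreteScheduler
    )
    # from_config preserves the model's trained noise-schedule settings
    pipe.scheduler = cls.from_config(pipe.scheduler.config)

    # torch.compile pays a one-time warm-up cost on the first call,
    # then speeds up every subsequent generation
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
    return pipe
```

Compiling only helps when the same pipeline is reused many times at a fixed resolution; one-off generations are faster without it.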

Image Generation Pipeline

# diffusion/generate.py — Diffusers image generation with all major pipelines
from __future__ import annotations
import io
from pathlib import Path
from typing import Optional

import torch
from PIL import Image


# ── 1. Text-to-image with SDXL ────────────────────────────────────────────────

def load_sdxl_pipeline(
    model_id:  str = "stabilityai/stable-diffusion-xl-base-1.0",
    device:    str = "cuda",
    optimize:  bool = True,
):
    """Load SDXL pipeline with optional memory optimizations."""
    from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

    pipe = StableDiffusionXLPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        use_safetensors=True,
        variant="fp16",
    )

    # Use Euler Ancestral scheduler (faster, slightly less sharp)
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
        pipe.scheduler.config
    )

    if optimize:
        pipe.enable_attention_slicing()                        # Lower peak VRAM at a small speed cost
        try:
            pipe.enable_xformers_memory_efficient_attention()  # xFormers if available
        except Exception:
            pass

    pipe = pipe.to(device)
    return pipe


def generate_sdxl(
    pipe,
    prompt:          str,
    negative_prompt: str  = "blurry, bad quality, low resolution, ugly, deformed",
    num_images:      int  = 1,
    steps:           int  = 30,
    guidance_scale:  float = 7.5,
    width:           int  = 1024,
    height:          int  = 1024,
    seed:            Optional[int] = None,
) -> list[Image.Image]:
    generator = torch.Generator(device=pipe.device).manual_seed(seed) if seed is not None else None

    output = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_images_per_prompt=num_images,
        num_inference_steps=steps,
        guidance_scale=guidance_scale,
        width=width,
        height=height,
        generator=generator,
    )
    return output.images


# ── 2. FLUX pipeline (state-of-the-art) ──────────────────────────────────────

def load_flux_pipeline(
    model_id: str = "black-forest-labs/FLUX.1-schnell",  # schnell=fast, dev=quality
    device:   str = "cuda",
):
    """FLUX.1 — best text-to-image quality as of 2024."""
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
    )
    # Offload each component to CPU, moving it to GPU only while active
    # (large VRAM savings, at a significant speed cost)
    pipe.enable_sequential_cpu_offload()
    return pipe


def generate_flux(
    pipe,
    prompt:     str,
    steps:      int   = 4,     # Schnell is optimized for 4 steps
    guidance:   float = 0.0,   # Schnell is guidance-distilled
    width:      int   = 1024,
    height:     int   = 1024,
    seed:       Optional[int] = None,
) -> Image.Image:
    generator = torch.Generator("cpu").manual_seed(seed) if seed is not None else None
    image = pipe(
        prompt,
        num_inference_steps=steps,
        guidance_scale=guidance,
        width=width,
        height=height,
        generator=generator,
    ).images[0]
    return image


# ── 3. ControlNet for conditioned generation ──────────────────────────────────

def load_controlnet_canny_pipeline(
    base_model: str = "runwayml/stable-diffusion-v1-5",
    device:     str = "cuda",
):
    """Canny edge ControlNet — generate images matching an edge map."""
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers import UniPCMultistepScheduler

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_canny",
        torch_dtype=torch.float16,
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        base_model,
        controlnet=controlnet,
        torch_dtype=torch.float16,
        safety_checker=None,
    )
    pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
    pipe.enable_attention_slicing()
    return pipe.to(device)


def canny_to_control_image(image: Image.Image, low: int = 100, high: int = 200) -> Image.Image:
    """Extract Canny edges from an image for ControlNet conditioning."""
    import cv2
    import numpy as np

    img   = np.array(image)
    gray  = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, low, high)
    return Image.fromarray(np.stack([edges]*3, axis=-1))


def controlnet_generate(
    pipe,
    prompt:             str,
    control_image:      Image.Image,  # Edge map / depth map / pose image
    negative_prompt:    str   = "blurry, low quality",
    controlnet_scale:   float = 0.5,
    steps:              int   = 20,
    guidance_scale:     float = 7.5,
) -> Image.Image:
    return pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        image=control_image,
        controlnet_conditioning_scale=controlnet_scale,
        num_inference_steps=steps,
        guidance_scale=guidance_scale,
    ).images[0]


# ── 4. Img2Img and Inpainting ─────────────────────────────────────────────────

def img2img(
    prompt:    str,
    init_image: Image.Image,
    strength:  float = 0.7,   # 0=keep original, 1=fully transform
    model_id:  str   = "runwayml/stable-diffusion-v1-5",
    device:    str   = "cuda",
) -> Image.Image:
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16, safety_checker=None
    ).to(device)
    init_image = init_image.resize((512, 512))
    return pipe(prompt=prompt, image=init_image, strength=strength).images[0]


def inpaint(
    prompt:     str,
    image:      Image.Image,
    mask:       Image.Image,  # White = inpaint, Black = keep
    model_id:   str = "runwayml/stable-diffusion-inpainting",
    device:     str = "cuda",
) -> Image.Image:
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16, safety_checker=None
    ).to(device)
    image = image.resize((512, 512))
    mask  = mask.resize((512, 512))
    return pipe(prompt=prompt, image=image, mask_image=mask).images[0]


# ── 5. LoRA loading ───────────────────────────────────────────────────────────

def load_with_lora(
    base_model_id: str = "stabilityai/stable-diffusion-xl-base-1.0",
    lora_paths:    Optional[list[str]]   = None,  # safetensors files or Hub IDs
    lora_weights:  Optional[list[float]] = None,  # Blending weights (0-1)
    device:        str = "cuda",
):
    """Load SDXL pipeline and stack multiple LoRA adapters."""
    from diffusers import StableDiffusionXLPipeline

    lora_paths   = lora_paths or []
    lora_weights = lora_weights or []

    pipe = StableDiffusionXLPipeline.from_pretrained(
        base_model_id, torch_dtype=torch.float16
    ).to(device)

    adapter_names = []
    for i, (lora_path, weight) in enumerate(zip(lora_paths, lora_weights)):
        name = f"adapter_{i}"
        pipe.load_lora_weights(lora_path, adapter_name=name)
        adapter_names.append(name)

    if adapter_names:
        pipe.set_adapters(adapter_names, adapter_weights=lora_weights)

    return pipe


# ── 6. Batch generation utility ──────────────────────────────────────────────

def generate_batch(
    pipe,
    prompts:    list[str],
    output_dir: str = "outputs",
    **kwargs,
) -> list[Path]:
    """Generate images for a list of prompts and save to disk."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    saved: list[Path] = []

    for i, prompt in enumerate(prompts):
        images = pipe(prompt, **kwargs).images
        for j, image in enumerate(images):
            fn = out / f"img_{i:04d}_{j}.png"
            image.save(fn)
            saved.append(fn)
        print(f"[{i+1}/{len(prompts)}] {prompt[:50]}... → {len(images)} image(s)")

    return saved


if __name__ == "__main__":
    # SDXL example
    pipe = load_sdxl_pipeline()
    images = generate_sdxl(
        pipe,
        prompt="A photorealistic red fox sitting in a snowy forest at dawn",
        num_images=1,
        seed=42,
    )
    images[0].save("fox_sdxl.png")
    print("Saved: fox_sdxl.png")

If you want a browser-based GUI with extensions, ControlNet, and LoRA loading without writing Python, the Stable Diffusion WebUI (AUTOMATIC1111) or ComfyUI is the alternative: they provide no-code interfaces, while Diffusers gives programmatic API access for building production pipelines, batch generation, custom schedulers, and composable diffusion workflows that can't be automated through a GUI. If you prioritize simplicity, built-in safety filtering, and not owning a GPU, the OpenAI DALL-E API is the alternative: DALL-E charges per image and sends your data to OpenAI, while Diffusers runs entirely locally at zero per-image cost, with open-weight models that can be fine-tuned for custom styles and domain-specific subjects via DreamBooth and LoRA. The Claude Skills 360 bundle includes Diffusers skill sets covering SDXL and FLUX pipelines, ControlNet conditioning, LoRA adapter loading, inpainting, DreamBooth fine-tuning, and batch generation utilities. Start with the free tier to try image-generation code generation.
