Claude Code for einops: Tensor Reshape Operations

Published: December 3, 2027
Read time: 5 min read
By: Claude Skills 360

einops provides readable tensor operations with named-axis notation. Install with pip install einops, then import the functional API with from einops import rearrange, reduce, repeat, pack, unpack and the PyTorch layers with from einops.layers.torch import Rearrange, Reduce.

Rearrange: rearrange(x, "b c h w -> b (c h w)") — flatten C, H, W. rearrange(x, "b (h h2) (w w2) -> b h w (h2 w2)", h2=2, w2=2) — unfold into patches. rearrange(x, "b c h w -> b h w c") — channels last. rearrange(x, "(b t) c -> b t c", b=batch) — unbatch. Merge: rearrange(x, "b h w c -> b (h w) c"). Split: rearrange(x, "b (n d) -> b n d", n=8). Swap batch and channel dims: rearrange(x, "b c h w -> c b h w").

Reduce: reduce(x, "b c h w -> b c", "mean") — global average pool. reduce(x, "b c h w -> b c 1 1", "mean") — GAP keeping dims. reduce(x, "b n d -> b d", "max") — max over the sequence.

Repeat: repeat(x, "b c -> b c h w", h=H, w=W) — broadcast to spatial. repeat(x, "b 1 d -> b n d", n=N) — expand along the sequence.

Pack/Unpack: packed, ps = pack([img, seq], "b * d") — pack heterogeneous shapes; img, seq = unpack(packed, ps, "b * d") restores them.

Layers: Rearrange("b c h w -> b (h w) c") and Reduce("b c h w -> b c", "mean") work as nn.Module. EinMix: from einops.layers.torch import EinMix; EinMix("b t c -> b t d", weight_shape="c d", c=512, d=256) — a learnable projection. Claude Code generates einops tensor transformations for vision transformers, attention mechanisms, and data pipeline reshaping.
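A minimal runnable sketch of the four core operations above — shapes here are arbitrary and chosen only for illustration:

# quick_demo.py — hypothetical snippet, not part of the pipeline below
import torch
from einops import rearrange, reduce, repeat, pack, unpack

x = torch.randn(4, 3, 32, 32)                          # (B, C, H, W) feature map

tokens = rearrange(x, "b c h w -> b (h w) c")          # (4, 1024, 3): spatial dims flattened into a token sequence
pooled = reduce(x, "b c h w -> b c", "mean")           # (4, 3): global average pool
tiled  = repeat(pooled, "b c -> b c h w", h=8, w=8)    # (4, 3, 8, 8): broadcast back to a spatial grid

txt = torch.randn(4, 10, 3)                            # a second, shorter token sequence
packed, ps = pack([tokens, txt], "b * d")              # (4, 1034, 3): concatenated along the * axis
img_back, txt_back = unpack(packed, ps, "b * d")       # (4, 1024, 3) and (4, 10, 3) restored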

CLAUDE.md for einops

## einops Stack
- Version: einops >= 0.7
- Core: rearrange | reduce | repeat | pack | unpack
- Backends: numpy, torch, tensorflow, jax, cupy — auto-detected
- Syntax: "input_pattern -> output_pattern" with named axes
- Merge axes: (a b) in output | Split: (a b) in input with size kwarg
- Layers: from einops.layers.torch import Rearrange, Reduce — as nn.Module
- EinMix: learnable linear projection with named axes
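Because backends are auto-detected, the same pattern string works unchanged across array libraries. A minimal sketch with numpy and torch (shapes are arbitrary):

# backend_demo.py — hypothetical snippet showing backend auto-detection
import numpy as np
import torch
from einops import rearrange

pattern = "b c h w -> b (h w) c"
out_np    = rearrange(np.zeros((2, 3, 4, 4)), pattern)   # numpy in → numpy out, shape (2, 16, 3)
out_torch = rearrange(torch.zeros(2, 3, 4, 4), pattern)  # torch in → torch out, shape (2, 16, 3)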

einops Tensor Operations Pipeline

# ml/einops_pipeline.py — readable tensor operations with einops
from __future__ import annotations
import math

import torch
import torch.nn as nn
from einops import rearrange, reduce, repeat, pack, unpack
from einops.layers.torch import Rearrange, Reduce, EinMix


# ── 0. Core operation patterns ────────────────────────────────────────────────

def demo_rearrange(x: torch.Tensor) -> dict[str, torch.Tensor]:
    """
    Demonstrate common rearrange patterns.
    Input x: (batch, channels, height, width) — standard conv feature map.
    """
    b, c, h, w = x.shape
    results = {}

    # Flatten spatial dimensions
    results["flatten_hw"]   = rearrange(x, "b c h w -> b c (h w)")
    # Flatten all non-batch
    results["flatten_all"]  = rearrange(x, "b c h w -> b (c h w)")
    # Channel last (for TensorFlow-style ops)
    results["channel_last"] = rearrange(x, "b c h w -> b h w c")
    # Transpose batch and channel
    results["swap_bc"]      = rearrange(x, "b c h w -> c b h w")
    # Add sequence dim (for transformer input)
    results["to_seq"]       = rearrange(x, "b c h w -> b (h w) c")
    # Unfold into non-overlapping patches (h2×w2)
    results["patches"]      = rearrange(x, "b c (h h2) (w w2) -> b (h w) (c h2 w2)", h2=2, w2=2)
    # Add singleton dims
    results["add_dims"]     = rearrange(x, "b c h w -> b 1 c h w")
    # Remove singleton (if c==1)
    if c == 1:
        results["squeeze_c"] = rearrange(x, "b 1 h w -> b h w")

    return results


def demo_reduce(x: torch.Tensor) -> dict[str, torch.Tensor]:
    """
    Global pooling and reduction patterns.
    Input x: (batch, channels, height, width).
    """
    return {
        # Global average pool → (b, c)
        "gap":          reduce(x, "b c h w -> b c", "mean"),
        # Global max pool → (b, c)
        "gmp":          reduce(x, "b c h w -> b c", "max"),
        # GAP preserving dims → (b, c, 1, 1) — compatible with conv output
        "gap_keepdims": reduce(x, "b c h w -> b c 1 1", "mean"),
        # Row mean → (b, c, h)
        "row_mean":     reduce(x, "b c h w -> b c h", "mean"),
        # Mean over all non-batch dims → (b,) scalar per sample
        "total_mean":   reduce(x, "b c h w -> b", "mean"),
    }


def demo_repeat(x: torch.Tensor, n_heads: int = 8) -> dict[str, torch.Tensor]:
    """
    Broadcasting / expansion patterns.
    Input x: (batch, features) tensor with at least 64 features.
    """
    # Take a (b, d) slice to expand along new axes
    vec = x[:, :64]
    return {
        # (b, d) → (b, n, d): same vector at every position
        "seq_expand": repeat(vec, "b d -> b n d", n=16),
        # (b, d) → (b, h, d): broadcast across heads
        "head_expand": repeat(vec, "b d -> b h d", h=n_heads),
        # (b, d) → (b, d, h, w): broadcast to spatial
        "spatial_expand": repeat(vec, "b d -> b d h w", h=8, w=8),
    }


# ── 1. Vision Transformer helpers ─────────────────────────────────────────────

def image_to_patches(
    images:     torch.Tensor,  # (B, C, H, W)
    patch_size: int = 16,
) -> torch.Tensor:
    """
    Patchify an image into non-overlapping patches for ViT.
    Returns: (B, num_patches, patch_dim)  where patch_dim = C * P * P
    """
    return rearrange(
        images,
        "b c (h p1) (w p2) -> b (h w) (c p1 p2)",
        p1=patch_size, p2=patch_size,
    )


def patches_to_image(
    patches:    torch.Tensor,   # (B, num_patches, patch_dim)
    H:          int,
    W:          int,
    patch_size: int = 16,
    C:          int = 3,
) -> torch.Tensor:
    """Reconstruct image from patches (reverse of image_to_patches)."""
    h = H // patch_size
    w = W // patch_size
    return rearrange(
        patches,
        "b (h w) (c p1 p2) -> b c (h p1) (w p2)",
        h=h, w=w, p1=patch_size, p2=patch_size, c=C,
    )


def multi_head_split(
    x:       torch.Tensor,   # (B, T, D)
    n_heads: int,
) -> torch.Tensor:
    """Split embedding dim into heads: (B, T, D) → (B, H, T, D/H)."""
    return rearrange(x, "b t (h d) -> b h t d", h=n_heads)


def multi_head_merge(
    x: torch.Tensor,   # (B, H, T, D/H)
) -> torch.Tensor:
    """Merge heads back: (B, H, T, D/H) → (B, T, H*D/H)."""
    return rearrange(x, "b h t d -> b t (h d)")


def compute_attention(
    q: torch.Tensor,   # (B, H, T, D)
    k: torch.Tensor,
    v: torch.Tensor,
    scale: float | None = None,
) -> torch.Tensor:
    """Scaled dot-product attention using einops-style notation with torch."""
    d = q.shape[-1]
    scale = scale or math.sqrt(d)
    # (B, H, T, T) attention weights
    attn = torch.einsum("bhqd, bhkd -> bhqk", q, k) / scale
    attn = attn.softmax(dim=-1)
    # (B, H, T, D)
    out  = torch.einsum("bhqk, bhkd -> bhqd", attn, v)
    return out


# ── 2. Temporal / sequence patterns ──────────────────────────────────────────

def batch_sequences(
    sequences: list[torch.Tensor],   # list of (T_i, D) tensors (variable length)
    pad_value: float = 0.0,
) -> tuple[torch.Tensor, torch.Tensor]:
    """
    Pack variable-length sequences into a padded batch.
    Returns (B, T_max, D) and (B,) length mask.
    """
    max_t = max(s.shape[0] for s in sequences)
    d     = sequences[0].shape[-1]
    batch = torch.full((len(sequences), max_t, d), pad_value, dtype=sequences[0].dtype)
    lengths = torch.zeros(len(sequences), dtype=torch.long)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq
        lengths[i] = len(seq)
    return batch, lengths


def unbatch_time(
    x:         torch.Tensor,   # (B*T, D) — flattened for efficient layer pass
    batch:     int,
) -> torch.Tensor:
    """Reshape (B*T, D) back to (B, T, D)."""
    return rearrange(x, "(b t) d -> b t d", b=batch)


def sliding_window(
    x:        torch.Tensor,   # (B, T, D)
    win_size: int,
    stride:   int = 1,
) -> torch.Tensor:
    """
    Create sliding windows along the time axis.
    Returns (B, num_windows, win_size, D).
    Only non-overlapping windows (stride == win_size) can be expressed with
    rearrange; arbitrary strides fall back to torch's unfold.
    """
    if stride == win_size:
        # Non-overlapping case: requires T divisible by win_size
        return rearrange(x, "b (t w) d -> b t w d", w=win_size)
    # General case via unfold
    windows = x.unfold(1, win_size, stride)   # (B, num_windows, D, win_size)
    return rearrange(windows, "b n d w -> b n w d")


# ── 3. Pack / Unpack for mixed modalities ─────────────────────────────────────

def pack_multimodal(
    image_feats: torch.Tensor,   # (B, H*W, D) image patches
    text_feats:  torch.Tensor,   # (B, T, D) text tokens
    audio_feats: torch.Tensor,   # (B, A, D) audio frames
) -> tuple[torch.Tensor, list]:
    """
    Pack heterogeneous modality features along the sequence axis.
    Returns packed (B, H*W+T+A, D) and pattern sizes for unpack.
    """
    packed, ps = pack([image_feats, text_feats, audio_feats], "b * d")
    return packed, ps


def unpack_multimodal(
    packed: torch.Tensor,
    ps:     list,
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """Unpack after cross-modal attention back to original shapes."""
    img, txt, aud = unpack(packed, ps, "b * d")
    return img, txt, aud


# ── 4. PyTorch module layers ──────────────────────────────────────────────────

def build_patch_embedding(
    image_size:  int = 224,
    patch_size:  int = 16,
    in_channels: int = 3,
    embed_dim:   int = 768,
) -> nn.Sequential:
    """
    ViT patch embedding using einops Rearrange + Linear.
    Equivalent to Conv2d(kernel=patch_size, stride=patch_size) but explicit.
    """
    patch_dim = in_channels * patch_size * patch_size
    return nn.Sequential(
        Rearrange("b c (h p1) (w p2) -> b (h w) (c p1 p2)",
                  p1=patch_size, p2=patch_size),
        nn.LayerNorm(patch_dim),
        nn.Linear(patch_dim, embed_dim),
        nn.LayerNorm(embed_dim),
    )


def build_gap_classifier(
    feature_dim: int,
    num_classes: int,
) -> nn.Sequential:
    """Global average pool + classifier head using einops Reduce."""
    return nn.Sequential(
        Reduce("b c h w -> b c", "mean"),   # GAP
        nn.LayerNorm(feature_dim),
        nn.Linear(feature_dim, num_classes),
    )


def build_einmix_ffn(
    d_model:  int,
    d_ff:     int,
    dropout:  float = 0.1,
) -> nn.Sequential:
    """
    Feed-forward network with EinMix (a named-dimension linear projection).
    EinMix behaves like nn.Linear with explicit axis names; no bias term is
    used here since bias_shape is not set.
    """
    return nn.Sequential(
        EinMix("b t d_model -> b t d_ff",
               weight_shape="d_model d_ff",
               d_model=d_model, d_ff=d_ff),
        nn.GELU(),
        nn.Dropout(dropout),
        EinMix("b t d_ff -> b t d_model",
               weight_shape="d_ff d_model",
               d_ff=d_ff, d_model=d_model),
    )


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("einops Tensor Operations Demo")
    print("=" * 50)

    B, C, H, W = 4, 3, 32, 32
    x = torch.randn(B, C, H, W)

    # Core operations
    reshaped = demo_rearrange(x)
    for name, t in reshaped.items():
        print(f"  rearrange {name}: {tuple(t.shape)}")

    print()
    pooled = demo_reduce(x)
    for name, t in pooled.items():
        print(f"  reduce {name}: {tuple(t.shape)}")

    # ViT patchify
    print()
    imgs = torch.randn(2, 3, 64, 64)
    patches = image_to_patches(imgs, patch_size=8)
    print(f"  image_to_patches: {tuple(imgs.shape)}{tuple(patches.shape)}")
    reconstructed = patches_to_image(patches, 64, 64, patch_size=8, C=3)
    print(f"  patches_to_image: {tuple(patches.shape)}{tuple(reconstructed.shape)}")

    # Multi-head attention
    seq = torch.randn(2, 16, 128)
    q = multi_head_split(seq, n_heads=4)
    print(f"\n  multi_head_split: {tuple(seq.shape)}{tuple(q.shape)}")
    merged = multi_head_merge(q)
    print(f"  multi_head_merge: {tuple(q.shape)}{tuple(merged.shape)}")

    # Pack/unpack
    img_f  = torch.randn(2, 49, 256)
    txt_f  = torch.randn(2, 16, 256)
    aud_f  = torch.randn(2, 8, 256)
    packed, ps = pack_multimodal(img_f, txt_f, aud_f)
    print(f"\n  pack_multimodal: {tuple(packed.shape)} (49+16+8={49+16+8} tokens)")
    i2, t2, a2 = unpack_multimodal(packed, ps)
    print(f"  unpack back: img={tuple(i2.shape)}, txt={tuple(t2.shape)}, aud={tuple(a2.shape)}")

    # Layers
    patch_embed = build_patch_embedding(64, patch_size=8, embed_dim=128)
    out = patch_embed(torch.randn(2, 3, 64, 64))
    print(f"\n  patch_embedding output: {tuple(out.shape)}")

Compared with the torch.view / torch.permute alternative: x.view(b, -1) loses all information about which dimensions were merged, requiring a mental model of the original shape at every call site, while rearrange(x, "b c h w -> b (c h w)") is self-documenting — the pattern simultaneously specifies the input contract and the output shape. Likewise, rearrange(x, "b c (h p1) (w p2) -> b (h w) (c p1 p2)", p1=16, p2=16) replaces a four-line view/permute/view chain with a single expression that a new team member can read directly.

Compared with the einsum alternative for multi-head attention: torch.einsum("bhqd,bhkd->bhqk", q, k) is powerful, but rearrange(x, "b t (h d) -> b h t d", h=8) prepares the head split and rearrange(out, "b h t d -> b t (h d)") merges the heads back without a manual .view() or .transpose(), making the split/merge pattern reproducible from the operation string alone. Similarly, pack([img, txt, audio], "b * d") collapses variable-length multi-modal sequences into one tensor for self-attention, and unpack restores the original shapes without any manual padding logic.

The Claude Skills 360 bundle includes einops skill sets covering rearrange flatten/unfold/transpose/split patterns, reduce GAP/GMP/keepdims, repeat broadcasting, pack/unpack for multi-modal batching, image_to_patches for ViT, multi_head_split/merge for attention, sliding_window unfold, Rearrange/Reduce PyTorch layers, the EinMix FFN, and the patch embedding module. Start with the free tier to try tensor reshape code generation.
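To make the view/permute comparison concrete, here is a minimal sketch of 16×16 patchification done both ways — shapes are chosen only for illustration, and both calls produce identical tensors:

# patchify_comparison.py — hypothetical side-by-side, not part of the pipeline above
import torch
from einops import rearrange

x = torch.randn(8, 3, 224, 224)                               # (B, C, H, W)
p = 16

# view/permute/reshape chain: correct, but the reader must track every shape mentally
v = x.view(8, 3, 224 // p, p, 224 // p, p)                    # (B, C, h, p1, w, p2)
v = v.permute(0, 2, 4, 1, 3, 5).reshape(8, -1, 3 * p * p)     # (B, h*w, C*p1*p2)

# einops: one pattern documents both the input contract and the output layout
e = rearrange(x, "b c (h p1) (w p2) -> b (h w) (c p1 p2)", p1=p, p2=p)

assert torch.equal(v, e)                                      # same values, same axis ordering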
