AI

Claude Code for pipes: Python Shell Pipeline Constructor

Published: November 2, 2028
Read time: 5 min read
By: Claude Skills 360

Python’s pipes module (Unix only) constructs shell command pipelines programmatically. import pipes. Create a template: t = pipes.Template(), then t.append(cmd, kind), where cmd is a shell string (using $IN and $OUT placeholders when the step reads or writes files) and kind is a two-character code: '--' (filter: reads stdin, writes stdout), '.-' (source: reads no input, must be first), 'f-' (reads file $IN, writes stdout), '-f' (reads stdin, writes file $OUT), or 'ff' (reads $IN, writes $OUT). Run: t.open(infile, mode) returns a file-like object; mode 'r' reads the pipeline's output, 'w' writes into it. Clone: t2 = t.clone(). Reset: t.reset(). Quote: pipes.quote(filename) safely shell-escapes a filename (same implementation as shlex.quote). The module also exports named kind constants such as pipes.STDIN_STDOUT ('--') and pipes.FILEIN_FILEOUT ('ff'). Example: append 'gzip -c $IN > $OUT' with kind 'ff' to compress a file. Note: pipes was deprecated in Python 3.11 and removed in 3.13 — always guard the import with try/except ImportError; subprocess.Popen chains are the recommended replacement. Claude Code generates composable text pipelines, filter chains, multi-file transformers, and shell command constructors.
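As a quick illustration, here is a minimal guarded sketch of the legacy API (the tr filter and the helper name uppercase_file are illustrative, not from the module itself):

```python
# Sketch: uppercase a file through the legacy pipes.Template API.
# The import is guarded because pipes was removed in Python 3.13.
try:
    import pipes
except ImportError:          # Python 3.13+
    pipes = None

def uppercase_file(path: str) -> str:
    """Read *path* through a tr filter; '--' means stdin-to-stdout filter."""
    if pipes is None:
        raise RuntimeError("pipes was removed in 3.13; use subprocess instead")
    t = pipes.Template()
    t.append("tr a-z A-Z", "--")
    with t.open(path, "r") as f:   # 'r': read the pipeline's output
        return f.read()
```

On Python ≤ 3.12 on Unix, uppercase_file("notes.txt") returns the file's contents uppercased; on 3.13+ it raises, which is exactly why new code should prefer subprocess chains.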

CLAUDE.md for pipes

## pipes Stack
- Stdlib: import pipes  (deprecated 3.11, removed 3.13 — guard with try/except)
- Create: t = pipes.Template()
- Append: t.append("gzip -c $IN > $OUT", "ff")   # file→file
-         t.append("wc -l", "--")                 # filter stdin→stdout
- Run:    f = t.open("input.txt", "r")             # read output of pipeline
-         f = t.open("output.txt", "w")            # write into pipeline
- Quote:  pipes.quote(filename)   # same as shlex.quote()
- Note:   kind codes — '--'=filter  'ff'=file-to-file  'f-'=read-file  '-f'=write-file
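The recommended subprocess replacement noted above can be sketched as a two-stage Popen chain (the tr/grep commands are illustrative):

```python
# Sketch: the subprocess equivalent of `tr a-z A-Z | grep -v '^$'`.
import subprocess

def run_chain(data: bytes) -> bytes:
    p1 = subprocess.Popen(["tr", "a-z", "A-Z"],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "-v", "^$"],
                          stdin=p1.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()          # let p1 receive SIGPIPE if p2 exits early
    p1.stdin.write(data)       # feed the head of the chain
    p1.stdin.close()
    out, _ = p2.communicate()  # collect the tail's stdout
    p1.wait()
    return out

print(run_chain(b"hello world\n\ngoodbye\n"))
# → b'HELLO WORLD\nGOODBYE\n'
```

Passing argument lists (not shell=True strings) also sidesteps quoting problems entirely for fixed commands.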

pipes Shell Pipeline Constructor

# app/pipesutil.py — template builder, subprocess fallback, composable chains
from __future__ import annotations

import os
import platform
import shlex
import subprocess
import tempfile
from dataclasses import dataclass
from pathlib import Path

_PIPES_AVAILABLE = platform.system() != "Windows"

# Guard for Python 3.13+ where pipes is removed
try:
    import pipes as _pipes
    _PIPES_MODULE_AVAILABLE = True
except ImportError:
    _PIPES_MODULE_AVAILABLE = False


# ─────────────────────────────────────────────────────────────────────────────
# 1. Shell-safe quoting (works without pipes module)
# ─────────────────────────────────────────────────────────────────────────────

def shell_quote(s: str) -> str:
    """
    Return a shell-safe quoted version of a string.
    Uses shlex.quote (same implementation as pipes.quote).

    Example:
        cmd = f"cat {shell_quote(filename)} | wc -l"
    """
    return shlex.quote(s)


def shell_join(args: list[str]) -> str:
    """
    Join a list of arguments into a shell-safe string.

    Example:
        cmd = shell_join(["grep", "-r", "pattern", "/some path/with spaces"])
    """
    return shlex.join(args)


# ─────────────────────────────────────────────────────────────────────────────
# 2. Subprocess-based pipeline (Python 3.13 replacement for pipes.Template)
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class PipeStep:
    """A single filter step in a pipeline."""
    cmd:     str          # shell command string; use {input} and {output} for file→file
    kind:    str = "f"    # 'f'=filter, 'ff'=file-to-file

    def __str__(self) -> str:
        return f"PipeStep({self.cmd!r}, kind={self.kind!r})"


class Pipeline:
    """
    Composable shell pipeline built on subprocess.
    Drop-in conceptual replacement for pipes.Template with explicit subprocess wiring.

    Example:
        pl = Pipeline()
        pl.append("tr a-z A-Z")      # uppercase filter
        pl.append("grep -v '^$'")    # remove blank lines
        result = pl.run_text("hello world\n\ngoodbye")
    """

    def __init__(self) -> None:
        self._steps: list[PipeStep] = []

    def append(self, cmd: str, kind: str = "f") -> "Pipeline":
        """Add a filter step to the end of the pipeline. Returns self for chaining."""
        self._steps.append(PipeStep(cmd=cmd, kind=kind))
        return self

    def prepend(self, cmd: str, kind: str = "f") -> "Pipeline":
        """Insert a filter step at the beginning. Returns self for chaining."""
        self._steps.insert(0, PipeStep(cmd=cmd, kind=kind))
        return self

    def clone(self) -> "Pipeline":
        """Return a shallow copy of this pipeline."""
        new = Pipeline()
        new._steps = list(self._steps)
        return new

    def reset(self) -> "Pipeline":
        """Remove all steps."""
        self._steps.clear()
        return self

    def run_bytes(self, input_data: bytes = b"") -> bytes:
        """
        Run the pipeline, feeding input_data to stdin.
        Returns stdout as bytes. Raises subprocess.CalledProcessError on failure.

        Example:
            out = Pipeline().append("gzip").run_bytes(b"hello world")
        """
        if not _PIPES_AVAILABLE:
            raise OSError("Pipeline requires a Unix shell")

        steps = [s for s in self._steps if s.kind == "f"]
        if not steps:
            return input_data
        # Thread the bytes through each filter via subprocess.run; this avoids
        # the deadlocks that hand-wired Popen chains invite on larger inputs.
        return _run_filter_chain(steps, input_data)

    def run_text(self, text: str, encoding: str = "utf-8") -> str:
        """
        Run the pipeline with text input. Returns text output.

        Example:
            result = Pipeline().append("tr a-z A-Z").run_text("hello")
            # → "HELLO"
        """
        raw = self.run_bytes(text.encode(encoding))
        return raw.decode(encoding)

    def run_file_to_file(self, src: str | Path, dst: str | Path) -> None:
        """
        Run the pipeline reading from src and writing to dst.
        Only filter ('f') steps are applied; bytes are threaded through
        each step's stdin/stdout.

        Example:
            Pipeline().append("gzip").run_file_to_file("data.txt", "data.txt.gz")
        """
        src = Path(src)
        dst = Path(dst)
        steps = [s for s in self._steps if s.kind == "f"]
        if not steps:
            import shutil
            shutil.copy2(src, dst)
            return
        dst.write_bytes(_run_filter_chain(steps, src.read_bytes()))

    def __repr__(self) -> str:
        return f"Pipeline({' | '.join(s.cmd for s in self._steps)})"


def _run_filter_chain(steps: list[PipeStep], input_data: bytes) -> bytes:
    """Run a chain of filter steps, threading bytes through subprocess.Popen."""
    data = input_data
    for step in steps:
        result = subprocess.run(
            step.cmd, shell=True,
            input=data,
            capture_output=True,
        )
        if result.returncode != 0:
            raise subprocess.CalledProcessError(
                result.returncode, step.cmd,
                output=result.stdout, stderr=result.stderr
            )
        data = result.stdout
    return data


# ─────────────────────────────────────────────────────────────────────────────
# 3. Wrapped pipes.Template (when available)
# ─────────────────────────────────────────────────────────────────────────────

def make_template():
    """
    Return a pipes.Template object if available, else raise ImportError.
    Use Pipeline() as first choice; this is for legacy compatibility.

    Example:
        t = make_template()
        t.append("gzip -c $IN > $OUT", "ff")
    """
    if not _PIPES_MODULE_AVAILABLE:
        raise ImportError(
            "pipes.Template not available (Python 3.13+ removed it). "
            "Use Pipeline() instead."
        )
    return _pipes.Template()


# ─────────────────────────────────────────────────────────────────────────────
# 4. Predefined useful pipelines
# ─────────────────────────────────────────────────────────────────────────────

def compress_pipeline(level: int = 6) -> Pipeline:
    """Returns a Pipeline that compresses bytes with gzip."""
    return Pipeline().append(f"gzip -{level}")


def decompress_pipeline() -> Pipeline:
    """Returns a Pipeline that decompresses gzip bytes."""
    return Pipeline().append("gunzip -c")


def word_count_pipeline() -> Pipeline:
    """Returns a Pipeline that counts words → output is 'lines words bytes\\n'."""
    return Pipeline().append("wc")


def uppercase_pipeline() -> Pipeline:
    """Returns a Pipeline that converts text to uppercase."""
    return Pipeline().append("tr a-z A-Z")


def line_count(text: str) -> int:
    """
    Count lines in text using a shell wc -l pipeline.

    Example:
        n = line_count("hello\nworld\n")
    """
    raw = Pipeline().append("wc -l").run_text(text)
    return int(raw.strip())


def grep_filter(pattern: str, invert: bool = False) -> Pipeline:
    """
    Returns a Pipeline that filters lines matching pattern.

    Example:
        out = grep_filter("ERROR").run_text(log_text)
    """
    flag = "-v " if invert else ""
    return Pipeline().append(f"grep {flag}{shell_quote(pattern)}")


def sort_unique_pipeline(reverse: bool = False) -> Pipeline:
    """Returns a Pipeline that sorts lines and removes duplicates."""
    sort_flags = "-r" if reverse else ""
    pl = Pipeline()
    pl.append(f"sort {sort_flags}")
    pl.append("uniq")
    return pl


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("=== pipes demo ===")

    if not _PIPES_AVAILABLE:
        print("  pipes not available on Windows; "
              "demonstrating shell_quote only:")
        for s in ["/normal/path", "/path with spaces/file.txt",
                  "file; rm -rf /"]:
            print(f"  {s!r} → {shell_quote(s)!r}")
        raise SystemExit(0)

    # ── shell_quote ────────────────────────────────────────────────────────────
    print("\n--- shell_quote ---")
    for s in ["/normal/path", "/path with spaces/file.txt",
              "file; rm -rf /", "it's a file"]:
        print(f"  {s!r} → {shell_quote(s)!r}")

    # ── basic text pipeline ────────────────────────────────────────────────────
    print("\n--- Pipeline: uppercase + remove blank lines ---")
    text = "hello world\n\nfoo bar\n\nbaz\n"
    pl = Pipeline().append("tr a-z A-Z").append("grep -v '^$'")
    print(f"  repr: {pl}")
    result = pl.run_text(text)
    print(f"  input:  {text!r}")
    print(f"  output: {result!r}")

    # ── word count ─────────────────────────────────────────────────────────────
    print("\n--- line_count ---")
    sample = "line one\nline two\nline three\n"
    print(f"  '{sample.strip()[:20]}...' → {line_count(sample)} lines")

    # ── grep filter ────────────────────────────────────────────────────────────
    print("\n--- grep_filter ---")
    log = ("INFO server started\nERROR disk full\n"
           "INFO request ok\nERROR config missing\nINFO shutdown\n")
    errors = grep_filter("ERROR").run_text(log)
    print(f"  ERROR lines:\n{errors.rstrip()}")

    # ── sort_unique ────────────────────────────────────────────────────────────
    print("\n--- sort_unique_pipeline ---")
    words = "banana\napple\ncherry\napple\nbanana\norange\n"
    unique = sort_unique_pipeline().run_text(words)
    print(f"  unique sorted: {unique.rstrip()!r}")

    # ── clone and extend ───────────────────────────────────────────────────────
    print("\n--- clone + prepend ---")
    base = grep_filter("ERROR")
    extended = base.clone().prepend("cat -n")   # add line numbers
    result2 = extended.run_text(log)
    print(f"  numbered errors:\n{result2.rstrip()}")

    # ── gzip round-trip ───────────────────────────────────────────────────────
    print("\n--- gzip round-trip ---")
    original = b"hello from the pipes demo " * 20
    compressed = compress_pipeline().run_bytes(original)
    expanded   = decompress_pipeline().run_bytes(compressed)
    print(f"  original: {len(original)}B  "
          f"compressed: {len(compressed)}B  "
          f"expanded: {len(expanded)}B  "
          f"match: {original == expanded}")

    # ── pipes.Template (if available) ─────────────────────────────────────────
    print("\n--- pipes.Template (legacy) ---")
    if _PIPES_MODULE_AVAILABLE:
        t = make_template()
        t.append("tr a-z A-Z", "--")   # '--' = filter: stdin→stdout
        with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as tmp:
            tmp.write(b"hello pipes template\n")
            tmp_path = tmp.name
        try:
            with t.open(tmp_path, "r") as f:
                out = f.read()
            print(f"  pipes.Template output: {out!r}")
        finally:
            os.unlink(tmp_path)
    else:
        print("  pipes.Template not available (Python 3.13+) — using Pipeline()")

    print("\n=== done ===")

For the subprocess alternative: subprocess.Popen(cmd, shell=True, stdin=prev.stdout, stdout=subprocess.PIPE) chains give you full control over process lifecycle, stdin/stdout/stderr routing, timeouts, and error handling without the pipes.Template abstraction. Use subprocess.Popen chains (or subprocess.run() for single-step commands) for all new code; the Pipeline class above wraps this pattern in the same fluent API that pipes.Template provided.

For the shlex alternative: shlex.quote(s) (the same implementation behind pipes.quote()) and shlex.join(args) safely escape filenames and argument lists for shell commands. shlex.quote is the portable replacement for pipes.quote and remains available in Python 3.13+; use it whenever you build shell command strings from user-supplied paths or values, to prevent shell injection.

The Claude Skills 360 bundle includes pipes skill sets covering shell_quote()/shell_join() safe quoting, the PipeStep/Pipeline subprocess-based template replacement with run_bytes()/run_text()/run_file_to_file(), the predefined compress_pipeline()/grep_filter()/sort_unique_pipeline() factories, line_count(), and the make_template() legacy bridge. Start with the free tier to try shell pipeline patterns and pipes pipeline code generation.
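The injection risk that shlex.quote addresses can be shown in a few lines (the hostile filename and command strings here are illustrative):

```python
# Sketch: shell-injection-safe command construction with shlex.
import shlex

hostile = "notes; rm -rf /"           # a hostile "filename"
cmd = f"wc -c < {shlex.quote(hostile)}"
print(cmd)                            # the metacharacters are neutralised
# → wc -c < 'notes; rm -rf /'

args = ["grep", "-n", "TODO", "src dir/with spaces.py"]
print(shlex.join(args))
# → grep -n TODO 'src dir/with spaces.py'
```

Unquoted, the first command would run `rm -rf /` as a second shell statement; quoted, the whole string is passed to wc as a single operand.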

Keep Reading

AI

Claude Code for email.contentmanager: Python Email Content Accessors

Read and write EmailMessage body content with Python's email.contentmanager module and Claude Code: the ContentManager class with its raw_data_manager and content_manager instances, the type-specific get_content/set_content handlers for text and binary parts, EmailMessage.make_alternative/make_mixed/make_related and add_attachment, and integration with email.message, email.policy, email.mime, and io for building body accessors, attachment extractors, HTML readers, and policy-aware MIME construction pipelines.

5 min read Feb 12, 2029
AI

Claude Code for email.charset: Python Email Charset Encoding

Control header and body encoding for international email with Python's email.charset module and Claude Code: the Charset class with its header_encoding, body_encoding, input_codec, and output_codec attributes, the header_encode/body_encode/convert methods, the add_charset/add_alias/add_codec registry functions, and integration with email.message, email.mime, email.policy, and email.encoders for building international senders, non-ASCII header encoders, and charset-aware MIME pipelines.

5 min read Feb 11, 2029
AI

Claude Code for email.utils: Python Email Address and Header Utilities

Parse and format RFC 2822 email addresses and dates with Python's email.utils module and Claude Code: parseaddr/formataddr/getaddresses for address handling, parsedate/parsedate_tz/parsedate_to_datetime and formatdate/format_datetime for dates, make_msgid for unique Message-IDs, the RFC 2231 parameter helpers decode_rfc2231/encode_rfc2231/collapse_rfc2231_value, and integration with email.message, email.headerregistry, datetime, and time for building address parsers, date formatters, and RFC-compliant email construction utilities.

5 min read Feb 10, 2029

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.

Back to Blog

Get 360 skills free