Claude Code for io: In-Memory Streams in Python — Claude Skills 360 Blog

Claude Code for io: In-Memory Streams in Python

Published: July 18, 2028
Read time: 5 min read
By: Claude Skills 360

Python’s io module provides in-memory stream objects compatible with file I/O (import io):

- StringIO: buf = io.StringIO(); buf.write("text"); buf.getvalue() builds in-memory text.
- BytesIO: buf = io.BytesIO(initial_bytes) builds in-memory bytes.
- getvalue: buf.getvalue() returns the full buffer contents without moving the position.
- seek: buf.seek(0) rewinds; buf.seek(0, io.SEEK_END) jumps to the end.
- tell: buf.tell() reports the current position.
- truncate: buf.truncate(n) resizes the buffer to n bytes/chars.
- read: buf.read(n) returns n chars/bytes; buf.read() returns everything from the current position.
- readline / readlines: buf.readline() reads up to a newline; buf.readlines() returns a list of lines.
- write / writelines: buf.write(s) writes at the current position (overwriting, not appending); buf.writelines(lines) writes each item.
- TextIOWrapper: io.TextIOWrapper(io.BytesIO(data), encoding="utf-8") decodes a byte stream as text.
- BufferedReader / BufferedWriter: io.BufferedReader(raw_io, buffer_size=io.DEFAULT_BUFFER_SIZE) wraps a raw readable; io.BufferedWriter wraps a raw writable with buffering.
- RawIOBase / IOBase: base classes for custom streams.
- Seek constants: SEEK_SET=0, SEEK_CUR=1, SEEK_END=2.
- closed: buf.closed is True after close().
- Context manager: with io.StringIO() as buf: auto-closes on exit.

Claude Code generates in-memory CSV formatters, HTTP response buffers, ZIP-in-memory builders, and test stream fixtures.
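The position-sensitive operations above are the usual source of confusion, so here is a minimal, self-contained sketch of how getvalue(), seek(), tell(), and truncate() interact:

```python
import io

# Text buffer: write, check position, rewind, read back.
buf = io.StringIO()
buf.write("hello\nworld\n")
assert buf.tell() == 12                     # position sits after the last write
assert buf.getvalue() == "hello\nworld\n"   # getvalue() ignores the position
assert buf.read() == ""                     # read() does not: nothing past pos 12
buf.seek(0)                                 # rewind before reading
assert buf.readline() == "hello\n"

# Binary buffer: jump to the end, then truncate back down.
raw = io.BytesIO(b"abcdef")
raw.seek(0, io.SEEK_END)
assert raw.tell() == 6
raw.truncate(3)                             # keep only b"abc"
assert raw.getvalue() == b"abc"
```

The key distinction: getvalue() always returns the whole buffer, while read() consumes from wherever the position happens to be.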

CLAUDE.md for io

## io Stack
- Stdlib: import io
- Text buffer: buf = io.StringIO(); buf.write("..."); content = buf.getvalue()
- Binary buffer: buf = io.BytesIO(); buf.write(b"..."); buf.seek(0); buf.read()
- Wrap bytes as text: io.TextIOWrapper(BytesIO(data), encoding="utf-8")
- Rewind: buf.seek(0) before read() (getvalue() ignores the current position)
- CSV in memory: buf = io.StringIO(); csv.writer(buf).writerows(rows); s = buf.getvalue()
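The CSV-in-memory pattern from the stack above can be sketched as follows; note that csv.writer terminates rows with \r\n by default, which surprises people comparing against "\n"-joined strings:

```python
import csv
import io

# Write rows into a text buffer instead of a file on disk.
buf = io.StringIO()
csv.writer(buf).writerows([["name", "score"], ["Alice", 95]])

# csv uses \r\n line endings by default (per RFC 4180).
assert buf.getvalue() == "name,score\r\nAlice,95\r\n"
```

Pass lineterminator="\n" to csv.writer if downstream consumers expect Unix newlines.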

io In-Memory Stream Pipeline

# app/streamutil.py — StringIO, BytesIO, TextIOWrapper, capture, pipe
from __future__ import annotations

import contextlib
import io
import sys
from typing import Any, BinaryIO, Generator


# ─────────────────────────────────────────────────────────────────────────────
# 1. StringIO helpers
# ─────────────────────────────────────────────────────────────────────────────

def build_text(lines: list[str], sep: str = "\n") -> str:
    """
    Concatenate lines into a string using an in-memory buffer.

    Example:
        text = build_text(["line 1", "line 2", "line 3"])
    """
    buf = io.StringIO()
    buf.write(sep.join(lines))
    return buf.getvalue()


def text_lines(text: str) -> list[str]:
    """
    Split text into lines using StringIO readline iteration.

    Example:
        lines = text_lines("a\nb\nc\n")  # ["a\n", "b\n", "c\n"]
    """
    buf = io.StringIO(text)
    return buf.readlines()


def iter_text_lines(text: str) -> list[str]:
    """
    Iterate over a string line by line as if it were a file.
    Strips trailing newlines.

    Example:
        for line in iter_text_lines(response_body):
            process(line)
    """
    buf = io.StringIO(text)
    return [line.rstrip("\n") for line in buf]


@contextlib.contextmanager
def string_buffer() -> Generator[io.StringIO, None, None]:
    """
    Context manager yielding a StringIO; auto-closes.

    Example:
        with string_buffer() as buf:
            csv.writer(buf).writerows(rows)
            result = buf.getvalue()
    """
    buf = io.StringIO()
    try:
        yield buf
    finally:
        buf.close()


# ─────────────────────────────────────────────────────────────────────────────
# 2. BytesIO helpers
# ─────────────────────────────────────────────────────────────────────────────

def concat_bytes(*chunks: bytes) -> bytes:
    """
    Concatenate byte chunks via BytesIO (avoids N intermediate copies).

    Example:
        data = concat_bytes(header, body, footer)
    """
    buf = io.BytesIO()
    for chunk in chunks:
        buf.write(chunk)
    return buf.getvalue()


def bytes_to_lines(data: bytes, encoding: str = "utf-8") -> list[str]:
    """
    Decode bytes and return lines using TextIOWrapper over BytesIO.

    Example:
        lines = bytes_to_lines(response.content)
    """
    wrapper = io.TextIOWrapper(io.BytesIO(data), encoding=encoding)
    return [line.rstrip("\n") for line in wrapper]


def encode_text(text: str, encoding: str = "utf-8") -> bytes:
    """
    Encode text to bytes via a BytesIO round-trip.
    Useful for streaming text to a binary API.

    Example:
        payload = encode_text("Hello, world!")
    """
    buf = io.BytesIO()
    wrapper = io.TextIOWrapper(buf, encoding=encoding, write_through=True)
    wrapper.write(text)
    wrapper.flush()
    return buf.getvalue()


@contextlib.contextmanager
def byte_buffer(initial: bytes = b"") -> Generator[io.BytesIO, None, None]:
    """
    Context manager yielding a BytesIO buffer; auto-closes.

    Example:
        with byte_buffer() as buf:
            PIL.Image.fromarray(arr).save(buf, format="PNG")
            png_bytes = buf.getvalue()
    """
    buf = io.BytesIO(initial)
    try:
        yield buf
    finally:
        buf.close()


# ─────────────────────────────────────────────────────────────────────────────
# 3. Stdout / stderr capture
# ─────────────────────────────────────────────────────────────────────────────

@contextlib.contextmanager
def capture_stdout() -> Generator[io.StringIO, None, None]:
    """
    Capture everything printed to stdout in a StringIO buffer.

    Example:
        with capture_stdout() as out:
            print("hello")
            print("world")
        assert out.getvalue() == "hello\nworld\n"
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        yield buf


@contextlib.contextmanager
def capture_stderr() -> Generator[io.StringIO, None, None]:
    """
    Capture everything written to stderr.

    Example:
        with capture_stderr() as err:
            print("oops", file=sys.stderr)
        assert "oops" in err.getvalue()
    """
    buf = io.StringIO()
    with contextlib.redirect_stderr(buf):
        yield buf


@contextlib.contextmanager
def capture_output() -> Generator[dict[str, io.StringIO], None, None]:
    """
    Capture stdout and stderr simultaneously.
    Yields {"stdout": StringIO, "stderr": StringIO}.

    Example:
        with capture_output() as streams:
            run_command()
        print(streams["stdout"].getvalue())
    """
    out_buf = io.StringIO()
    err_buf = io.StringIO()
    with contextlib.redirect_stdout(out_buf), contextlib.redirect_stderr(err_buf):
        yield {"stdout": out_buf, "stderr": err_buf}


# ─────────────────────────────────────────────────────────────────────────────
# 4. CSV and JSON in-memory
# ─────────────────────────────────────────────────────────────────────────────

def rows_to_csv_bytes(
    rows: list[dict],
    fieldnames: list[str] | None = None,
    encoding: str = "utf-8",
) -> bytes:
    """
    Serialize dicts to CSV bytes without touching the filesystem.

    Example:
        data = rows_to_csv_bytes([{"name": "Alice", "score": 95}])
        response.headers["Content-Type"] = "text/csv"
        response.body = data
    """
    import csv
    text_buf = io.StringIO()
    # Parenthesize the conditional: without parens, "a or b if c else d"
    # parses as "(a or b) if c else d" and drops explicit fieldnames
    # whenever rows is empty.
    fn = fieldnames or (list(rows[0].keys()) if rows else [])
    writer = csv.DictWriter(text_buf, fieldnames=fn, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return text_buf.getvalue().encode(encoding)


def json_to_bytes(obj: Any, indent: int | None = None) -> bytes:
    """
    Serialize to JSON bytes in-memory.

    Example:
        payload = json_to_bytes({"event": "charge.created", "amount": 4999})
    """
    import json
    buf = io.BytesIO()
    wrapper = io.TextIOWrapper(buf, encoding="utf-8", write_through=True)
    json.dump(obj, wrapper, indent=indent, default=str)
    wrapper.flush()
    return buf.getvalue()


def zip_in_memory(files: dict[str, bytes]) -> bytes:
    """
    Create a ZIP archive in memory without writing to disk.

    Example:
        zipped = zip_in_memory({
            "report.csv": csv_bytes,
            "summary.json": json_bytes,
        })
    """
    import zipfile
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()


# ─────────────────────────────────────────────────────────────────────────────
# 5. Stream position utilities
# ─────────────────────────────────────────────────────────────────────────────

def rewind(stream: io.IOBase) -> io.IOBase:
    """Seek to position 0 and return the stream."""
    stream.seek(0)
    return stream


def stream_size(stream: io.IOBase) -> int:
    """Return total byte/char size of a seekable stream."""
    pos = stream.tell()
    stream.seek(0, io.SEEK_END)
    size = stream.tell()
    stream.seek(pos)
    return size


def read_chunks(
    stream: BinaryIO,
    chunk_size: int = 65536,
) -> list[bytes]:
    """
    Read a binary stream in fixed-size chunks.

    Example:
        for chunk in read_chunks(response.raw):
            hasher.update(chunk)
    """
    chunks = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        chunks.append(chunk)
    return chunks


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("=== io demo ===")

    print("\n--- build_text / iter_text_lines ---")
    text = build_text(["alpha", "beta", "gamma"])
    print(f"  built: {text!r}")
    print(f"  lines: {iter_text_lines(text)}")

    print("\n--- string_buffer (CSV in memory) ---")
    import csv
    with string_buffer() as buf:
        w = csv.writer(buf)
        w.writerows([["name", "score"], ["Alice", 95], ["Bob", 82]])
        csv_text = buf.getvalue()
    print(f"  csv:\n{csv_text}", end="")

    print("\n--- bytes_to_lines ---")
    raw = b"line one\nline two\nline three"
    print(f"  lines: {bytes_to_lines(raw)}")

    print("\n--- concat_bytes ---")
    merged = concat_bytes(b"hello", b" ", b"world")
    print(f"  merged: {merged!r}")

    print("\n--- encode_text ---")
    encoded = encode_text("Hello, \u4e16\u754c!")
    print(f"  encoded: {encoded!r}")

    print("\n--- capture_stdout ---")
    with capture_stdout() as out:
        print("captured line 1")
        print("captured line 2")
    print(f"  output: {out.getvalue()!r}")

    print("\n--- capture_output ---")
    with capture_output() as streams:
        print("stdout message")
        print("stderr message", file=sys.stderr)
    print(f"  stdout: {streams['stdout'].getvalue()!r}")
    print(f"  stderr: {streams['stderr'].getvalue()!r}")

    print("\n--- rows_to_csv_bytes ---")
    rows = [{"name": "Alice", "score": 95}, {"name": "Bob", "score": 82}]
    b = rows_to_csv_bytes(rows)
    print(f"  {len(b)} bytes: {b[:40]!r}...")

    print("\n--- json_to_bytes ---")
    from datetime import datetime, timezone
    payload = {"event": "charge.created", "ts": datetime.now(timezone.utc)}
    jb = json_to_bytes(payload, indent=None)
    print(f"  {jb!r}")

    print("\n--- zip_in_memory ---")
    zipped = zip_in_memory({"data.csv": b"a,b\n1,2", "meta.txt": b"version=1"})
    print(f"  zip: {len(zipped)} bytes")

    print("\n--- stream_size ---")
    buf = io.BytesIO(b"hello world")
    buf.seek(3)
    print(f"  size={stream_size(buf)}  pos restored to {buf.tell()}")

    print("\n=== done ===")

For the tempfile alternative: tempfile.SpooledTemporaryFile(max_size=N) keeps data in memory (like BytesIO) while it stays under max_size, then spills to disk automatically once the limit is exceeded, whereas io.BytesIO always stays in memory. Use SpooledTemporaryFile for large or unknown-size streams that may exceed available RAM, and io.BytesIO/io.StringIO when the data is bounded and fits comfortably in memory.

For the pyarrow / fsspec alternative: pyarrow.BufferOutputStream and fsspec.implementations.memory.MemoryFileSystem provide high-performance in-memory I/O integrated with Arrow IPC, Parquet, and cloud filesystem abstractions, while io.BytesIO has no Arrow integration but near-zero overhead. Use PyArrow or fsspec streams when building data pipelines that produce Parquet or Arrow IPC in memory for S3/GCS upload, and io.BytesIO for general-purpose HTTP response bodies, in-memory archives, and test fixtures.

The Claude Skills 360 bundle includes io skill sets covering the build_text()/text_lines()/iter_text_lines()/string_buffer() StringIO helpers, the concat_bytes()/bytes_to_lines()/encode_text()/byte_buffer() BytesIO helpers, the capture_stdout()/capture_stderr()/capture_output() stream capture utilities, the rows_to_csv_bytes()/json_to_bytes()/zip_in_memory() serialization helpers, and the rewind()/stream_size()/read_chunks() stream position utilities. Start with the free tier to try in-memory streaming and io pipeline code generation.
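The spill-to-disk behavior of SpooledTemporaryFile can be observed directly. A minimal sketch: max_size=16 is an artificially small threshold for illustration, and _rolled is a private CPython implementation detail used here only to show when the rollover happens:

```python
import tempfile

with tempfile.SpooledTemporaryFile(max_size=16) as spool:
    spool.write(b"tiny")                    # 4 bytes: still backed by BytesIO
    assert not spool._rolled                # CPython-private flag, illustration only
    spool.write(b"x" * 32)                  # total exceeds max_size: spills to disk
    assert spool._rolled
    spool.seek(0)                           # same file API either way
    assert spool.read() == b"tiny" + b"x" * 32
```

Callers never have to care which backing store is active; the object exposes the same file interface before and after the rollover.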
