AI

Claude Code for PyArrow: Columnar Data Processing

Published: November 18, 2027
Read time: 5 min read
By: Claude Skills 360

PyArrow is the Python interface to Apache Arrow, a columnar in-memory format. Install it with pip install pyarrow, then import pyarrow as pa, import pyarrow.compute as pc, and import pyarrow.parquet as pq. Build a Table from a dict with table = pa.table({"col": [1, 2, 3], "name": ["a", "b", "c"]}), from Pandas with table = pa.Table.from_pandas(df), and convert back with df = table.to_pandas(). Declare explicit types with pa.schema([("id", pa.int64()), ("name", pa.string()), ("val", pa.float32())]) and cast with pc.cast(table["col"], pa.float64()). Filter with table.filter(pc.greater(table["val"], 0.5)), sort with table.sort_by([("col", "descending")]), and project with table.select(["a", "b"]). Compute kernels include pc.mean(table["score"]), pc.sum(col), pc.count(col, mode="only_valid"), and pc.utf8_lower(str_col).

Write Parquet with pq.write_table(table, "data.parquet", compression="snappy", row_group_size=50000) and read it back with pq.read_table("data.parquet", columns=["a", "b"]). Predicate pushdown is pq.read_table("data.parquet", filters=[("year", "=", 2024), ("score", ">", 0.5)]), and pq.ParquetDataset("s3://bucket/path", filters=...) handles multi-file datasets. For CSV, import pyarrow.csv and read with pa.csv.read_csv("data.csv", read_options=pa.csv.ReadOptions(block_size=2**24)). For IPC streaming, open sink = pa.BufferOutputStream(), create writer = pa.ipc.new_stream(sink, table.schema), and call writer.write_table(table). For dataset scanning, import pyarrow.dataset as ds, build dataset = ds.dataset("path/", format="parquet", partitioning="hive"), and run scanner = dataset.scanner(columns=["a"], filter=pc.greater(ds.field("score"), 0.5)). Claude Code generates PyArrow ETL pipelines, Parquet converters, predicate pushdown readers, and Arrow IPC streaming services.
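
As a minimal, hedged quick start pulling a few of those calls together (the events.parquet file name and the column values are placeholders, not from any real dataset):

# quickstart.py — illustrative sketch: build a table, filter it, round-trip through Parquet
import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pa.table(
    {"id": [1, 2, 3, 4], "region": ["west", "east", "west", "south"], "score": [0.2, 0.9, 0.7, 0.4]},
    schema=pa.schema([("id", pa.int64()), ("region", pa.string()), ("score", pa.float64())]),
)

# Vectorized filter and aggregate without leaving Arrow
hot = table.filter(pc.greater(table["score"], 0.5))
print("mean of filtered scores:", pc.mean(hot["score"]).as_py())

# Parquet round-trip with column pruning and predicate pushdown
pq.write_table(table, "events.parquet", compression="snappy")
back = pq.read_table("events.parquet", columns=["id", "score"], filters=[("score", ">", 0.5)])
print(back.num_rows, "rows after pushdown")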

CLAUDE.md for PyArrow

## PyArrow Stack
- Version: pyarrow >= 15.0
- Table: pa.table(dict) | pa.Table.from_pandas(df) | table.to_pandas()
- Compute: import pyarrow.compute as pc | pc.filter/cast/sort/mean/sum/utf8_*
- Parquet: pq.read_table(path, columns, filters) | pq.write_table(table, path)
- Dataset: pyarrow.dataset.dataset(path, format="parquet", partitioning="hive")
- CSV: pa.csv.read_csv(path, ReadOptions, ConvertOptions, ParseOptions)
- IPC: pa.ipc.new_stream(sink, schema) | write_table | .close()
- Schema: pa.schema([("col", pa.int64()), ...]) for explicit type control
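
A small sketch of the explicit-schema bullet above — pinning types at construction and widening one later with a compute cast. The column names are invented for the example:

# schema_control.py — illustrative only: declare types up front instead of trusting inference
import pyarrow as pa
import pyarrow.compute as pc

schema = pa.schema([("user_id", pa.int32()), ("amount", pa.float32()), ("country", pa.string())])
table  = pa.table({"user_id": [1, 2], "amount": [9.99, 12.5], "country": ["DE", "US"]}, schema=schema)

# set_column swaps a column in place by index; pc.cast widens float32 -> float64
idx  = table.schema.get_field_index("amount")
wide = table.set_column(idx, "amount", pc.cast(table["amount"], pa.float64()))
print(wide.schema)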

PyArrow Columnar Data Pipeline

# data/pyarrow_pipeline.py — columnar data processing with PyArrow
from __future__ import annotations
import numpy as np
import pandas as pd
from pathlib import Path
from typing import Iterator

import pyarrow as pa
import pyarrow.compute as pc
import pyarrow.parquet as pq
import pyarrow.csv as pa_csv
import pyarrow.ipc as pa_ipc
import pyarrow.dataset as pa_ds


# ── 1. Table construction and schema ─────────────────────────────────────────

def make_table(
    data: dict | pd.DataFrame,
    schema: pa.Schema | None = None,
) -> pa.Table:
    """
    Create an Arrow Table from a dict or Pandas DataFrame.
    Pass schema to enforce types (avoids implicit float64 for int columns).
    """
    if isinstance(data, pd.DataFrame):
        return pa.Table.from_pandas(data, schema=schema, preserve_index=False)
    return pa.table(data, schema=schema)


def infer_schema(df: pd.DataFrame) -> pa.Schema:
    """Infer Arrow schema from a Pandas DataFrame."""
    return pa.Schema.from_pandas(df, preserve_index=False)


def cast_columns(
    table:   pa.Table,
    casts:   dict,     # {"col": pa.target_type}
) -> pa.Table:
    """
    Cast multiple columns to target Arrow types.
    Example: cast_columns(t, {"price": pa.float32(), "id": pa.int32()})
    """
    for col_name, target_type in casts.items():
        idx  = table.schema.get_field_index(col_name)
        col  = pc.cast(table.column(col_name), target_type)
        table = table.set_column(idx, col_name, col)
    return table


# ── 2. Compute operations ─────────────────────────────────────────────────────

def filter_table(
    table:      pa.Table,
    conditions: list[tuple],
) -> pa.Table:
    """
    Filter a table with a list of (column, op, value) conditions ANDed together.
    op: ">" | "<" | ">=" | "<=" | "==" | "!=" | "in" | "not_in"
    Example: filter_table(t, [("score", ">", 0.5), ("region", "==", "west")])
    """
    op_map = {
        ">":      pc.greater,
        "<":      pc.less,
        ">=":     pc.greater_equal,
        "<=":     pc.less_equal,
        "==":     pc.equal,
        "!=":     pc.not_equal,
    }
    mask = None
    for col_name, op, value in conditions:
        col = table.column(col_name)
        if op == "in":
            cond = pc.is_in(col, value_set=pa.array(value))
        elif op == "not_in":
            cond = pc.invert(pc.is_in(col, value_set=pa.array(value)))
        else:
            cond = op_map[op](col, value)
        mask = cond if mask is None else pc.and_(mask, cond)
    return table.filter(mask)


def aggregate_table(
    table:      pa.Table,
    group_cols: list[str],
    agg_spec:   list[tuple],
) -> pd.DataFrame:
    """
    GroupBy aggregation using Arrow compute.
    agg_spec: [("col", "mean"), ("col2", "sum"), ("col", "count")]
    Returns Pandas DataFrame for convenience.
    """
    return (
        table.group_by(group_cols)
        .aggregate([(col, fn) for col, fn in agg_spec])
        .to_pandas()
        .sort_values(group_cols)
        .reset_index(drop=True)
    )


def sort_and_rank(
    table:      pa.Table,
    sort_keys:  list[tuple],  # [("col", "ascending"), ("col2", "descending")]
) -> pa.Table:
    """Sort table by multiple columns."""
    return table.sort_by(sort_keys)


def add_computed_column(
    table:   pa.Table,
    name:    str,
    expr,    # Result of a pc.* call
) -> pa.Table:
    """Append a computed column to the table."""
    return table.append_column(name, expr)


def string_operations(
    table:   pa.Table,
    col:     str,
) -> pa.Table:
    """Example: lowercase, strip whitespace, extract substring."""
    lowered = pc.utf8_lower(table.column(col))
    stripped = pc.utf8_strip_whitespace(lowered)
    return table.set_column(
        table.schema.get_field_index(col), col, stripped
    )


# ── 3. Parquet I/O ────────────────────────────────────────────────────────────

def write_parquet(
    table:          pa.Table,
    path:           str,
    compression:    str = "snappy",        # "snappy" | "zstd" | "gzip" | None
    row_group_size: int = 100_000,
    partition_by:   list[str] | None = None,
) -> str:
    """
    Write Arrow Table to Parquet.
    partition_by: write Hive-partitioned dataset (e.g., ["year", "region"])
    """
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    if partition_by:
        pa_ds.write_dataset(
            table, path, format="parquet",
            partitioning=pa_ds.partitioning(
                pa.schema([(col, table.schema.field(col).type) for col in partition_by]),
                flavor="hive",
            ),
            existing_data_behavior="overwrite_or_ignore",
        )
    else:
        pq.write_table(
            table, path,
            compression=compression,
            row_group_size=row_group_size,
        )
    return path


def read_parquet(
    path:        str,
    columns:     list[str] | None = None,
    filters:     list[tuple] | None = None,
    batch_size:  int | None = None,
) -> pa.Table | Iterator[pa.RecordBatch]:
    """
    Read Parquet with optional column pruning and predicate pushdown.
    filters use DNF form: [("col", "op", val), ...] or [[...], [...]] for OR.
    batch_size: if set, returns an iterator of RecordBatches instead.
    """
    if batch_size:
        # ParquetFile.iter_batches does not accept filters, so stream filtered
        # batches through the dataset API instead.
        dataset     = pa_ds.dataset(path, format="parquet")
        filter_expr = pq.filters_to_expression(filters) if filters else None
        scanner     = dataset.scanner(
            columns=columns, filter=filter_expr, batch_size=batch_size
        )
        return scanner.to_batches()

    kwargs = {}
    if columns:
        kwargs["columns"] = columns
    if filters:
        kwargs["filters"] = filters
    return pq.read_table(path, **kwargs)


def scan_parquet_dataset(
    path:    str,
    filter_expr = None,
    columns: list[str] | None = None,
) -> pa.Table:
    """
    Scan a Hive-partitioned Parquet dataset efficiently.
    Discovers partitions automatically from directory structure.
    Example path: "data/year=2024/" or "s3://bucket/data/"
    """
    dataset = pa_ds.dataset(path, format="parquet", partitioning="hive")
    scanner = dataset.scanner(
        columns=columns,
        filter=filter_expr,     # pa_ds.field("score") > 0.5
    )
    return scanner.to_table()


# ── 4. CSV fast I/O ───────────────────────────────────────────────────────────

def read_csv_fast(
    path:           str,
    columns:        list[str] | None = None,
    column_types:   dict | None = None,
    null_values:    list[str] | None = None,
    block_size_mb:  int = 32,
) -> pa.Table:
    """
    Fast CSV reading with Arrow (significantly faster than pandas.read_csv).
    block_size_mb: read in chunks of this size for large files.
    """
    read_options    = pa_csv.ReadOptions(block_size=block_size_mb * 1024 * 1024)
    convert_options = pa_csv.ConvertOptions(
        include_columns=columns,
        column_types=column_types or {},
        null_values=null_values or ["", "NA", "null", "NaN", "None"],
        true_values=["True", "true", "1"],
        false_values=["False", "false", "0"],
    )
    return pa_csv.read_csv(
        path,
        read_options=read_options,
        convert_options=convert_options,
    )


# ── 5. IPC streaming ──────────────────────────────────────────────────────────

def table_to_ipc_bytes(table: pa.Table) -> bytes:
    """Serialize an Arrow Table to IPC stream bytes (for network/cache transfer)."""
    sink   = pa.BufferOutputStream()
    writer = pa_ipc.new_stream(sink, table.schema)
    writer.write_table(table)
    writer.close()
    return sink.getvalue().to_pybytes()


def ipc_bytes_to_table(data: bytes) -> pa.Table:
    """Deserialize IPC stream bytes back to an Arrow Table."""
    buf    = pa.py_buffer(data)
    reader = pa_ipc.open_stream(buf)
    return reader.read_all()


def stream_table_in_batches(
    table:      pa.Table,
    batch_size: int,
):
    """Yield Arrow RecordBatches from a table for streaming processing."""
    # Table.to_batches yields true RecordBatches (slice() would yield sub-Tables).
    yield from table.to_batches(max_chunksize=batch_size)


# ── 6. Schema evolution and merging ──────────────────────────────────────────

def unify_schemas(tables: list[pa.Table]) -> pa.Table:
    """
    Concatenate tables with potentially different schemas.
    Missing columns are filled with null values.
    """
    return pa.concat_tables(tables, promote_options="default")


def rename_columns(table: pa.Table, renames: dict) -> pa.Table:
    """Rename columns via schema replacement."""
    new_names = [renames.get(name, name) for name in table.schema.names]
    return table.rename_columns(new_names)


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import tempfile, os, time
    print("PyArrow Columnar Data Demo")
    print("=" * 50)

    # Build a sample dataset
    np.random.seed(42)
    n = 1_000_000
    df = pd.DataFrame({
        "id":     np.arange(n, dtype=np.int32),
        "region": np.random.choice(["north", "south", "east", "west"], n),
        "score":  np.random.rand(n).astype(np.float32),
        "value":  np.random.exponential(100, n).astype(np.float32),
        "year":   np.random.choice([2022, 2023, 2024], n).astype(np.int16),
    })

    print(f"\nDataFrame: {n:,} rows")

    # Pandas CSV baseline
    with tempfile.TemporaryDirectory() as tmpdir:
        csv_path = os.path.join(tmpdir, "data.csv")
        df.to_csv(csv_path, index=False)

        t0 = time.perf_counter()
        df_pd = pd.read_csv(csv_path)
        pd_ms = (time.perf_counter() - t0) * 1000

        t0 = time.perf_counter()
        table = read_csv_fast(csv_path)
        arr_ms = (time.perf_counter() - t0) * 1000

        print(f"\nCSV read — pandas: {pd_ms:.1f} ms, Arrow: {arr_ms:.1f} ms  ({pd_ms/arr_ms:.1f}x speedup)")

        # Write Parquet
        pq_path = os.path.join(tmpdir, "data.parquet")
        t0 = time.perf_counter()
        write_parquet(table, pq_path)
        write_ms = (time.perf_counter() - t0) * 1000
        size_mb  = os.path.getsize(pq_path) / 1e6
        print(f"\nParquet write: {write_ms:.1f} ms, {size_mb:.1f} MB (vs {os.path.getsize(csv_path)/1e6:.1f} MB CSV)")

        # Predicate pushdown
        t0 = time.perf_counter()
        filtered = read_parquet(pq_path, columns=["id", "score"],
                                filters=[("region", "=", "north"), ("score", ">", 0.9)])
        filter_ms = (time.perf_counter() - t0) * 1000
        print(f"\nPredicate pushdown: {len(filtered):,} rows, {filter_ms:.1f} ms")

        # Compute (group by)
        print("\nGroupBy aggregation (Arrow compute):")
        agg = aggregate_table(table, ["region"], [("score", "mean"), ("value", "sum")])
        print(agg.to_string(index=False))

        # IPC round-trip
        small = table.slice(0, 10_000)
        ipc_bytes = table_to_ipc_bytes(small)
        recovered = ipc_bytes_to_table(ipc_bytes)
        print(f"\nIPC round-trip: {len(ipc_bytes):,} bytes → {len(recovered):,} rows recovered")

Compared with staying purely in Pandas for DataFrames: Pandas keeps data in NumPy-backed blocks and parses CSV on a single thread, so Arrow's multithreaded columnar reader is often 5-10x faster on large files, and Pandas has no Parquet predicate pushdown of its own. PyArrow's columnar layout enables vectorized, SIMD-friendly aggregations, the filters parameter of pq.read_table skips entire row groups on disk before decoding them, and pa.concat_tables with schema promotion merges files with different column sets without loading everything into Pandas at once.

Compared with DuckDB for SQL-based analytics: DuckDB gives you SQL syntax (and can query Arrow tables directly), while PyArrow's dataset Scanner with Arrow expressions handles partition-aware scans over S3/GCS — a short sketch of both scan styles follows below. pa.ipc.new_stream produces the same stream format that Arrow Flight uses for gRPC data transport between microservices, and because Pandas, Polars, Dask, Spark, and most warehouse clients understand Arrow memory, pa.Table.from_pandas / to_pandas and the shared layout keep conversions cheap and often zero-copy.

The Claude Skills 360 bundle includes PyArrow skill sets covering table construction and schemas, compute operations, filtering with predicate pushdown, Parquet read/write with compression, Hive-partitioned dataset scanning, fast CSV parsing, IPC stream serialization, batch streaming, and schema merging with column promotion. Start with the free tier to try columnar data pipeline code generation.
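
As a hedged sketch of the two scan styles mentioned above — the DNF filters list accepted by pq.read_table and the equivalent dataset expression — here is a minimal example; the sales/ directory, its Hive partition layout, and the column names are made-up placeholders.

# scan_paths.py — illustrative sketch: the same predicate via pq filters and a dataset expression
import pyarrow.dataset as ds
import pyarrow.parquet as pq

# 1) DNF filter list: tuples inside one list are ANDed, a list of lists means OR
t1 = pq.read_table("sales/", columns=["order_id", "amount"],
                   filters=[("year", "=", 2024), ("amount", ">", 100.0)])

# 2) Dataset expression: same predicate, partition-aware over local paths or S3/GCS
dataset = ds.dataset("sales/", format="parquet", partitioning="hive")
t2 = dataset.scanner(
    columns=["order_id", "amount"],
    filter=(ds.field("year") == 2024) & (ds.field("amount") > 100.0),
).to_table()

print(t1.num_rows, t2.num_rows)  # both paths should return the same rows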

Keep Reading

AI

Claude Code for email.contentmanager: Python Email Content Accessors

Read and write EmailMessage body content with Python's email.contentmanager module and Claude Code — email contentmanager ContentManager for the class that maps content types to get and set handler functions allowing EmailMessage to support get_content and set_content with type-specific behaviour, email contentmanager raw_data_manager for the ContentManager instance that handles raw bytes and str payloads without any conversion, email contentmanager content_manager for the standard ContentManager instance used by email.policy.default that intelligently handles text plain text html multipart and binary content types, email contentmanager get_content_text for the handler that returns the decoded text payload of a text-star message part as a str, email contentmanager get_content_binary for the handler that returns the raw decoded bytes payload of a non-text message part, email contentmanager get_data_manager for the get-handler lookup used by EmailMessage get_content to find the right reader function for the content type, email contentmanager set_content text for the handler that creates and sets a text part correctly choosing charset and transfer encoding, email contentmanager set_content bytes for the handler that creates and sets a binary part with base64 encoding and optional filename Content-Disposition, email contentmanager EmailMessage get_content for the method that reads the message body using the registered content manager handlers, email contentmanager EmailMessage set_content for the method that sets the message body and MIME headers in one call, email contentmanager EmailMessage make_alternative make_mixed make_related for the methods that convert a simple message into a multipart container, email contentmanager EmailMessage add_attachment for the method that attaches a file or bytes to a multipart message, and email contentmanager integration with email.message and email.policy and email.mime and io for building high-level email readers attachment extractors text body accessors HTML readers and policy-aware MIME construction pipelines.

5 min read Feb 12, 2029
AI

Claude Code for email.charset: Python Email Charset Encoding

Control header and body encoding for international email with Python's email.charset module and Claude Code — email charset Charset for the class that wraps a character set name with the encoding rules for header encoding and body encoding describing how to encode text for that charset in email messages, email charset Charset header_encoding for the attribute specifying whether headers using this charset should use QP quoted-printable encoding BASE64 encoding or no encoding, email charset Charset body_encoding for the attribute specifying the Content-Transfer-Encoding to use for message bodies in this charset such as QP or BASE64, email charset Charset output_codec for the attribute giving the Python codec name used to encode the string to bytes for the wire format, email charset Charset input_codec for the attribute giving the Python codec name used to decode incoming bytes to str, email charset Charset get_output_charset for returning the output charset name, email charset Charset header_encode for encoding a header string using the charset's header_encoding method, email charset Charset body_encode for encoding body content using the charset's body_encoding, email charset Charset convert for converting a string from the input_codec to the output_codec, email charset add_charset for registering a new charset with custom encoding rules in the global charset registry, email charset add_alias for adding an alias name that maps to an existing registered charset, email charset add_codec for registering a codec name mapping for use by the charset machinery, and email charset integration with email.message and email.mime and email.policy and email.encoders for building international email senders non-ASCII header encoders Content-Transfer-Encoding selectors charset-aware message constructors and MIME encoding pipelines.

5 min read Feb 11, 2029
AI

Claude Code for email.utils: Python Email Address and Header Utilities

Parse and format RFC 2822 email addresses and dates with Python's email.utils module and Claude Code — email utils parseaddr for splitting a display-name plus angle-bracket address string into a realname and email address tuple, email utils formataddr for combining a realname and address string into a properly quoted RFC 2822 address with angle brackets, email utils getaddresses for parsing a list of raw address header strings each potentially containing multiple comma-separated addresses into a list of realname address tuples, email utils parsedate for parsing an RFC 2822 date string into a nine-tuple compatible with time.mktime, email utils parsedate_tz for parsing an RFC 2822 date string into a ten-tuple that includes the UTC offset timezone in seconds, email utils parsedate_to_datetime for parsing an RFC 2822 date string into an aware datetime object with timezone, email utils formatdate for formatting a POSIX timestamp or the current time as an RFC 2822 date string with optional usegmt and localtime flags, email utils format_datetime for formatting a datetime object as an RFC 2822 date string, email utils make_msgid for generating a globally unique Message-ID string with optional idstring and domain components, email utils decode_rfc2231 for decoding an RFC 2231 encoded parameter value into a tuple of charset language and value, email utils encode_rfc2231 for encoding a string as an RFC 2231 encoded parameter value, email utils collapse_rfc2231_value for collapsing a decoded RFC 2231 tuple to a Unicode string, and email utils integration with email.message and email.headerregistry and datetime and time for building address parsers date formatters message-id generators header extractors and RFC-compliant email construction utilities.

5 min read Feb 10, 2029

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
