
Claude Code for Polars: Fast DataFrames in Python

Published: December 17, 2026
Read time: 9 min read
By: Claude Skills 360

Polars is a fast DataFrame library built on Apache Arrow and Rust. Its lazy API builds a query plan that gets optimized before execution — predicate pushdown, projection pushdown, and parallel execution happen automatically. The expression syntax composes without intermediate copies. Scanning Parquet or CSV directly with scan_parquet / scan_csv streams data without loading everything into memory. Claude Code generates Polars expressions, lazy pipelines, join patterns, and window functions, plus rewrites of common pandas patterns that often run 10-100x faster in Polars.
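
A quick way to see those optimizations is to print the optimized plan before collecting. A minimal sketch — the path and column names are placeholders:

import polars as pl

# Hypothetical path — substitute your own Parquet files
q = (
    pl.scan_parquet("data/orders/*.parquet")
    .filter(pl.col("status") == "delivered")
    .select(["customer_id", "total_cents"])
)

# The optimized plan shows the filter and column selection pushed into the scan
print(q.explain())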

CLAUDE.md for Polars Projects

## Polars Stack
- Version: polars >= 1.0 (stable API, avoid 0.x patterns)
- Use LazyFrame (scan_*) for files > 1GB — eager only for small in-memory data (see the sketch after this list)
- Expression API: pl.col(), pl.lit(), .alias() — no df["column"] indexing
- Avoid: .apply() / map_elements() — use built-in expressions instead
- File formats: Parquet preferred (columnar, predicate pushdown), CSV for exchange
- DuckDB integration: duckdb.sql(...).pl() for SQL queries on DataFrames; pl.read_database() is for pulling from external databases
- Typing: all operations return Polars types — never mix with pandas
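
The eager/lazy rule above looks like this in practice — a sketch with placeholder paths:

import polars as pl

# Small lookup table — eager read is fine, it fits comfortably in memory
lookup = pl.read_parquet("data/lookup.parquet")

# Large fact table — lazy scan lets filtering and column pruning run
# before any data is materialized
events = (
    pl.scan_parquet("data/events/*.parquet")
    .filter(pl.col("event_type") == "purchase")
    .select(["user_id", "amount"])
    .collect()
)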

Lazy API and Scan

import polars as pl
from pathlib import Path

# Lazy scan — no data loaded yet
orders_lazy = pl.scan_parquet("data/orders/*.parquet")
customers_lazy = pl.scan_csv("data/customers.csv")

# Build query plan (not executed yet)
result = (
    orders_lazy
    .filter(
        (pl.col("created_at") >= pl.lit("2026-01-01").str.to_date())
        & (pl.col("status") == "delivered")
        & (pl.col("total_cents") > 1000)
    )
    .join(
        customers_lazy.select(["id", "name", "email", "tier"]),
        left_on="customer_id",
        right_on="id",
        how="left",
    )
    .group_by("tier")
    .agg([
        pl.len().alias("order_count"),
        pl.col("total_cents").sum().alias("total_revenue_cents"),
        pl.col("total_cents").mean().alias("avg_order_cents"),
        pl.col("customer_id").n_unique().alias("unique_customers"),
    ])
    .sort("total_revenue_cents", descending=True)
)

# Execute — Polars applies predicate pushdown, projection pruning, parallelism
df = result.collect()
print(df)

Expression Syntax

import polars as pl
from datetime import date

df = pl.read_parquet("data/orders.parquet")

# Multiple transformations in one pass — no intermediate copies
enriched = df.with_columns([
    # Arithmetic
    (pl.col("total_cents") / 100).alias("total_dollars"),

    # String operations
    pl.col("status").str.to_uppercase().alias("status_upper"),
    pl.col("email").str.split("@").list.get(1).alias("email_domain"),

    # Date operations
    pl.col("created_at").dt.date().alias("order_date"),
    pl.col("created_at").dt.month().alias("order_month"),
    pl.col("created_at").dt.weekday().alias("weekday"),  # 0=Mon, 6=Sun

    # Conditional (equivalent to CASE WHEN)
    pl.when(pl.col("total_cents") >= 10000)
      .then(pl.lit("high_value"))
      .when(pl.col("total_cents") >= 5000)
      .then(pl.lit("medium_value"))
      .otherwise(pl.lit("standard"))
      .alias("customer_tier"),

    # Boolean expressions
    pl.col("status").is_in(["delivered", "shipped"]).alias("is_active"),
    pl.col("notes").is_not_null().alias("has_notes"),

    # Null handling
    pl.col("discount_cents").fill_null(0).alias("discount_cents_filled"),
    pl.col("promo_code").fill_null("NONE").alias("promo_code_clean"),
])

# Filter with expressions
high_value = enriched.filter(
    (pl.col("customer_tier") == "high_value")
    & (pl.col("is_active"))
    & (pl.col("order_date") >= date(2026, 1, 1))
)

Group By and Aggregations

# Multiple aggregations — all computed in one pass
summary = df.group_by(["status", "customer_tier"]).agg([
    pl.len().alias("count"),
    pl.col("total_cents").sum().alias("revenue_cents"),
    pl.col("total_cents").mean().alias("avg_cents"),
    pl.col("total_cents").median().alias("median_cents"),
    pl.col("total_cents").std().alias("std_cents"),
    pl.col("total_cents").quantile(0.95).alias("p95_cents"),
    pl.col("customer_id").n_unique().alias("unique_customers"),
    pl.col("created_at").min().alias("first_order"),
    pl.col("created_at").max().alias("last_order"),
    # Collect items into list
    pl.col("promo_code").drop_nulls().unique().alias("promo_codes_used"),
])

# Monthly rollup with growth metrics
monthly = (
    df
    .with_columns([
        pl.col("created_at").dt.truncate("1mo").alias("month"),
    ])
    .group_by("month")
    .agg([
        pl.len().alias("orders"),
        pl.col("total_cents").sum().alias("revenue"),
        pl.col("total_cents").mean().alias("aov"),  # Average order value
    ])
    .sort("month")
    .with_columns([
        # Month-over-month growth
        (pl.col("revenue") / pl.col("revenue").shift(1) - 1).alias("mom_growth"),
        # Rolling 3-month average
        pl.col("revenue").rolling_mean(window_size=3).alias("revenue_3mo_avg"),
    ])
)

Joins

# Left join — preserve all orders, add customer info
orders_with_customers = orders.join(
    customers.select(["id", "name", "email", "signup_date"]),
    left_on="customer_id",
    right_on="id",
    how="left",
)

# Inner join — only orders with matching inventory records
orders_with_inventory = orders.join(
    inventory,
    on="product_id",
    how="inner",
)

# Anti join — orders without corresponding payment record
unpaid_orders = orders.join(
    payments.select("order_id"),
    on="order_id",
    how="anti",
)
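
# Semi join — the complement of anti: keep only orders that DO have a
# matching payment record, without adding any payment columns (illustrative)
paid_orders = orders.join(
    payments.select("order_id"),
    on="order_id",
    how="semi",
)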

# Asof join — join on nearest date (e.g., find exchange rate on order date)
# Both sides must be sorted on their asof key
orders_with_fx = orders.sort("order_date").join_asof(
    fx_rates.sort("rate_date"),
    left_on="order_date",
    right_on="rate_date",
    strategy="backward",  # Use rate from last available date
    by="currency_code",
)

# Cross join (cartesian) — use sparingly
combinations = products.join(regions, how="cross")

Window Functions

# Running totals and ranking within groups
with_window = df.with_columns([
    # Rank within customer (1 = most recent)
    pl.col("created_at")
      .rank(method="ordinal", descending=True)
      .over("customer_id")
      .alias("order_rank"),

    # Cumulative sum within customer
    pl.col("total_cents")
      .cum_sum()
      .over("customer_id")
      .alias("lifetime_spend"),

    # Row number (for deduplication)
    pl.int_range(pl.len())
      .over("customer_id")
      .alias("row_num"),

    # Lag/lead for time-series analysis
    pl.col("total_cents")
      .shift(1)
      .over("customer_id")
      .alias("prev_order_cents"),

    # Rolling sum over the last 30 orders per customer
    # (row-based window — sort by created_at first)
    pl.col("total_cents")
      .rolling_sum(window_size=30)
      .over("customer_id")
      .alias("rolling_30_order_spend"),
])
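
If the window should cover the last 30 calendar days rather than the last 30 rows, recent Polars releases provide rolling_*_by expressions keyed on a datetime column. A sketch, assuming created_at is a datetime and the frame is sorted by it within each customer:

rolling_by_time = (
    df
    .sort(["customer_id", "created_at"])
    .with_columns(
        pl.col("total_cents")
          .rolling_sum_by("created_at", window_size="30d")  # time-based window
          .over("customer_id")
          .alias("rolling_30d_spend")
    )
)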

# Keep only the most recent order per customer
latest_order = (
    df
    .sort("created_at", descending=True)
    .with_columns(
        pl.int_range(pl.len()).over("customer_id").alias("rn")
    )
    .filter(pl.col("rn") == 0)
    .drop("rn")
)

Replacing Pandas Patterns

# ---- PANDAS ANTI-PATTERNS → POLARS EQUIVALENTS ----

# ❌ Pandas: iterrows() loop (slow)
# for _, row in df.iterrows():
#     result.append(compute(row["a"], row["b"]))

# ✓ Polars: vectorized expression
result = df.with_columns(
    (pl.col("a") * pl.col("b") + pl.col("c")).alias("result")
)

# ❌ Pandas: apply with lambda
# df["category"] = df["total"].apply(lambda x: "high" if x > 100 else "low")

# ✓ Polars: when/then/otherwise
df = df.with_columns(
    pl.when(pl.col("total") > 100).then(pl.lit("high")).otherwise(pl.lit("low")).alias("category")
)

# ❌ Pandas: multiple merge/join calls
# df = df.merge(a, on="id").merge(b, on="id").merge(c, on="id")

# ✓ Polars: chain joins (lazy evaluates all at once)
df = (
    pl.scan_parquet("orders.parquet")
    .join(pl.scan_parquet("customers.parquet"), left_on="customer_id", right_on="id")
    .join(pl.scan_parquet("products.parquet"), on="product_id")
    .collect()
)

# ❌ Pandas: chained string operations
# df["clean"] = df["email"].str.lower().str.strip().str.split("@").str[0]

# ✓ Polars: single expression
df = df.with_columns(
    pl.col("email").str.to_lowercase().str.strip_chars().str.split("@").list.get(0).alias("username")
)

# ❌ Pandas: value_counts() for frequency
# counts = df["status"].value_counts().reset_index()

# ✓ Polars: group_by + agg
counts = df.group_by("status").agg(pl.len().alias("count")).sort("count", descending=True)
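
# ❌ Pandas: sort + drop_duplicates to keep the latest row per key
# df = df.sort_values("created_at").drop_duplicates("customer_id", keep="last")

# ✓ Polars: unique() with a subset (illustrative — assumes these column names)
df = df.sort("created_at").unique(subset="customer_id", keep="last")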

Streaming Large Datasets

# Process files larger than RAM using streaming
result = (
    pl.scan_parquet("data/events/*.parquet")  # Could be 100GB+
    .filter(pl.col("event_type") == "purchase")
    .group_by("user_id")
    .agg([
        pl.len().alias("purchase_count"),
        pl.col("amount").sum().alias("total_spent"),
    ])
    .collect(streaming=True)  # Processes in chunks, never loads all data
)

# Write results incrementally
(
    pl.scan_parquet("data/raw/*.parquet")
    .filter(pl.col("date") >= pl.lit("2026-01-01").str.to_date())
    .group_by("category")
    .agg(pl.col("revenue").sum())
    .sink_parquet("data/summary/revenue_by_category.parquet")  # Streaming write
)

DuckDB Integration

import polars as pl
import duckdb

# Query Polars DataFrame with SQL via DuckDB
df = pl.read_parquet("orders.parquet")

# Register as virtual table and query
result = duckdb.sql("""
    SELECT 
        DATE_TRUNC('month', created_at) AS month,
        status,
        COUNT(*) AS order_count,
        SUM(total_cents) / 100.0 AS revenue
    FROM df
    WHERE created_at >= '2026-01-01'
    GROUP BY 1, 2
    ORDER BY 1, 2
""").pl()  # Returns Polars DataFrame

# Or scan Parquet directly with DuckDB (often fastest for complex SQL)
result = duckdb.sql("""
    SELECT * FROM read_parquet('data/orders/*.parquet')
    WHERE total_cents > 10000
""").pl()

For the pipelines that land data in Parquet files for Polars to process, see the data pipelines guide, which covers ETL patterns with dbt and Dagster. For OLAP queries against an analytics database alongside Polars, the ClickHouse guide covers columnar time-series analytics. The Claude Skills 360 bundle includes Polars skill sets covering the lazy API, expression syntax, and pandas migration patterns. Start with the free tier to try DataFrame transformation generation.

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
