
Claude Code for line_profiler: Line-by-Line Python Performance Profiling

Published: May 18, 2028
Read time: 5 min read
By: Claude Skills 360

line_profiler profiles Python code line by line. Install: pip install line_profiler. CLI: kernprof -l -v script.py — enables the @profile decorator and writes script.py.lprof. View: python -m line_profiler script.py.lprof. Programmatic: from line_profiler import LineProfiler; lp = LineProfiler(); lp.add_function(my_fn); lp.enable_by_count(); my_fn(args); lp.print_stats(). Decorator: a LineProfiler instance is callable, so use @lp above a def, or wrap explicitly: wrapped = lp(fn); wrapped(arg). Stats: lp.get_stats() returns a LineStats object with a timings dict and unit (seconds per timer tick). Output units: lp.print_stats(output_unit=1e-3) → ms; output_unit=1e-6 → µs. Persistence: lp.dump_stats("out.lprof"); lp.load_stats("out.lprof"). Control: lp.enable() / lp.disable(). Context manager: with lp: my_fn(). One-shot: lp.runcall(fn, *args). Default output sorts by line number. Jupyter: %load_ext line_profiler; %lprun -f fn fn(args). The global @profile only works under kernprof; use LineProfiler() directly for library code. Claude Code generates line_profiler hot path analysis, context-manager profiling helpers, pytest fixture profilers, and line timing reports.

CLAUDE.md for line_profiler

## line_profiler Stack
- Version: line_profiler >= 4.0 | pip install line_profiler
- CLI: kernprof -l -v script.py  (adds @profile; .lprof file created)
- View: python -m line_profiler script.py.lprof
- Programmatic: lp = LineProfiler(); lp.add_function(fn); lp.enable_by_count(); fn(); lp.print_stats()
- Output unit: lp.print_stats(output_unit=1e-3)  # milliseconds
- Jupyter: %load_ext line_profiler; %lprun -f fn fn(args)
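The kernprof workflow from the stack block can be sketched as a self-contained script. The filename and function are illustrative; kernprof injects `@profile` into builtins at runtime, so the guard below keeps the script runnable as plain `python` too:

```python
# example_script.py — hypothetical script for the kernprof workflow.
# Run:  kernprof -l -v example_script.py
# View: python -m line_profiler example_script.py.lprof
try:
    profile  # defined by kernprof when run under it
except NameError:
    def profile(fn):  # no-op fallback for plain `python example_script.py`
        return fn

@profile
def build_squares(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

if __name__ == "__main__":
    build_squares(50_000)
```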

line_profiler Hot Path Analysis Pipeline

# app/profiling.py — LineProfiler helpers, context managers, pytest fixtures, and reports
from __future__ import annotations

import io
import time
from contextlib import contextmanager
from functools import wraps
from pathlib import Path
from typing import Any, Callable

from line_profiler import LineProfiler


# ─────────────────────────────────────────────────────────────────────────────
# 1. Core profiler helpers
# ─────────────────────────────────────────────────────────────────────────────

def profile_function(
    fn: Callable,
    *args: Any,
    extra_functions: list[Callable] | None = None,
    output_unit: float = 1e-3,
    print_output: bool = True,
    **kwargs: Any,
) -> tuple[Any, LineProfiler]:
    """
    Profile a single function call line by line.
    extra_functions: also profile these callees if you want their line data.
    output_unit: 1e-3 = ms, 1e-6 = µs, 1.0 = seconds.
    Returns (result, profiler).

    Example:
        result, lp = profile_function(process_data, large_df, output_unit=1e-3)
    """
    lp = LineProfiler()
    lp.add_function(fn)
    for f in extra_functions or []:
        lp.add_function(f)

    result = lp.runcall(fn, *args, **kwargs)

    if print_output:
        lp.print_stats(output_unit=output_unit)

    return result, lp


@contextmanager
def profiling_ctx(
    *functions: Callable,
    output_unit: float = 1e-3,
    print_output: bool = True,
    dump_path: str | Path | None = None,
):
    """
    Context manager for profiling a block of code.
    Functions passed in will be profiled line by line.

    Usage:
        def compute(data): ...
        def transform(data): ...

        with profiling_ctx(compute, transform) as lp:
            result = compute(big_data)
            output = transform(result)
        # stats printed automatically on exit
    """
    lp = LineProfiler()
    for fn in functions:
        lp.add_function(fn)

    lp.enable_by_count()
    try:
        yield lp
    finally:
        lp.disable_by_count()
        if print_output:
            lp.print_stats(output_unit=output_unit)
        if dump_path:
            lp.dump_stats(str(dump_path))


def profiler_decorator(
    output_unit: float = 1e-3,
    print_output: bool = True,
    include_callees: list[Callable] | None = None,
) -> Callable:
    """
    Decorator factory for line-by-line profiling.
    Profiles the decorated function every time it is called.

    Usage:
        @profiler_decorator(output_unit=1e-6)
        def hot_function(data):
            ...
    """
    def decorator(fn: Callable) -> Callable:
        lp = LineProfiler()
        lp.add_function(fn)
        for callee in include_callees or []:
            lp.add_function(callee)

        @wraps(fn)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            result = lp.runcall(fn, *args, **kwargs)
            if print_output:
                lp.print_stats(output_unit=output_unit)
            return result

        wrapper._line_profiler = lp  # expose for inspection
        return wrapper

    return decorator


# ─────────────────────────────────────────────────────────────────────────────
# 2. Stats parsing helpers
# ─────────────────────────────────────────────────────────────────────────────

def get_hot_lines(
    lp: LineProfiler,
    top_n: int = 10,
    output_unit: float = 1e-3,
) -> list[dict]:
    """
    Extract top N slowest lines from a profiler.
    Returns list of dicts sorted by total time descending.

    Keys: file, line, func, hits, time_ms, per_hit_ms, percent, code.
    Note: time_ms and per_hit_ms are expressed in output_unit units
    (default 1e-3, i.e. milliseconds).
    """
    stats = lp.get_stats()
    unit  = stats.unit  # seconds per timer tick

    rows: list[dict] = []
    for (filename, start_lineno, func_name), timings in stats.timings.items():
        for lineno, nhits, total_time in timings:
            total_s = total_time * unit
            rows.append({
                "file":        filename,
                "line":        lineno,
                "func":        func_name,
                "hits":        nhits,
                "time_ms":     total_s / output_unit,
                "per_hit_ms":  (total_s / nhits / output_unit) if nhits else 0.0,
                "percent":     0.0,  # filled below
                "code":        "",   # placeholder; kept for report shape
            })

    if not rows:
        return []

    total_time = sum(r["time_ms"] for r in rows)
    for r in rows:
        r["percent"] = (r["time_ms"] / total_time * 100) if total_time else 0

    rows.sort(key=lambda r: r["time_ms"], reverse=True)
    return rows[:top_n]


def stats_to_string(lp: LineProfiler, output_unit: float = 1e-3) -> str:
    """Capture print_stats() output as a string."""
    buf = io.StringIO()
    lp.print_stats(stream=buf, output_unit=output_unit)
    return buf.getvalue()


def save_stats(lp: LineProfiler, path: str | Path) -> Path:
    """Save profiler stats to a .lprof file for later inspection."""
    p = Path(path)
    lp.dump_stats(str(p))
    return p


def load_stats(path: str | Path) -> LineProfiler:
    """Load .lprof stats from disk. Returns a new LineProfiler with stats loaded."""
    lp = LineProfiler()
    lp.load_stats(str(path))
    return lp


# ─────────────────────────────────────────────────────────────────────────────
# 3. Comparison helper
# ─────────────────────────────────────────────────────────────────────────────

def compare(
    baseline: Callable,
    improved: Callable,
    *args: Any,
    iterations: int = 3,
    output_unit: float = 1e-3,
    **kwargs: Any,
) -> dict:
    """
    Run baseline and improved implementations multiple times and compare.
    Returns dict with timing stats and speedup ratio.

    Example:
        from app.data import process_slow, process_fast

        report = compare(process_slow, process_fast, large_data, iterations=5)
        print(f"Speedup: {report['speedup']:.2f}x")
    """
    def time_fn(fn: Callable) -> tuple[float, LineProfiler]:
        times = []
        lp = LineProfiler()
        lp.add_function(fn)
        for _ in range(iterations):
            start = time.perf_counter()
            lp.runcall(fn, *args, **kwargs)  # wall-clock includes profiling overhead
            times.append(time.perf_counter() - start)
        return min(times), lp

    base_time, base_lp  = time_fn(baseline)
    new_time,  new_lp   = time_fn(improved)

    speedup = base_time / new_time if new_time > 0 else float("inf")

    return {
        "baseline_name":     baseline.__name__,
        "improved_name":     improved.__name__,
        "baseline_min_ms":   base_time * 1000,
        "improved_min_ms":   new_time  * 1000,
        "speedup":           speedup,
        "baseline_profiler": base_lp,
        "improved_profiler": new_lp,
    }


# ─────────────────────────────────────────────────────────────────────────────
# 4. pytest fixture
# ─────────────────────────────────────────────────────────────────────────────

PYTEST_FIXTURE = '''
# conftest.py — line_profiler pytest fixture
import pytest
from line_profiler import LineProfiler
from app.profiling import get_hot_lines, stats_to_string

@pytest.fixture
def line_profile():
    """
    Pytest fixture that returns a LineProfiler instance.
    Usage in tests:
        def test_my_function(line_profile):
            line_profile.add_function(my_module.hot_fn)
            line_profile.enable_by_count()
            result = my_module.hot_fn(test_input)
            line_profile.disable_by_count()
            line_profile.print_stats(output_unit=1e-3)
            hot = get_hot_lines(line_profile, top_n=5)
            assert hot[0]["percent"] < 80.0  # no single line dominates
    """
    lp = LineProfiler()
    yield lp
    # Optional: print stats after test even on failure
    try:
        lp.print_stats(output_unit=1e-3)
    except Exception:
        pass
'''


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

def _slow_sum(data: list) -> float:
    """Intentionally slow example function for demonstration."""
    total = 0.0
    for i, val in enumerate(data):      # line A — loop overhead
        total += val * 1.1              # line B — multiply
        if val > 500:                   # line C — conditional
            total -= 0.01 * val         # line D — adjustment
    result = total / max(len(data), 1)  # line E — average
    return result


def _fast_sum(data: list) -> float:
    """Faster (not numerically identical) version for comparison."""
    import statistics
    return statistics.mean(data)


if __name__ == "__main__":
    data = list(range(1, 10_001))

    print("=== profile_function ===")
    result, lp = profile_function(_slow_sum, data, output_unit=1e-6, print_output=False)
    print(f"  Result: {result:.4f}")

    hot = get_hot_lines(lp, top_n=5, output_unit=1e-6)
    print("  Top lines:")
    for row in hot:
        print(f"    Line {row['line']}: {row['time_ms']:.2f}µs ({row['percent']:.1f}%)")

    print("\n=== profiling_ctx ===")
    with profiling_ctx(_slow_sum, output_unit=1e-3, print_output=False) as lp2:
        for _ in range(5):
            _slow_sum(data)
    print(f"  Profiled 5 calls; {len(lp2.get_stats().timings)} function(s) recorded")

    print("\n=== compare: slow vs fast ===")
    report = compare(_slow_sum, _fast_sum, data, iterations=3)
    print(f"  {report['baseline_name']}: {report['baseline_min_ms']:.2f}ms")
    print(f"  {report['improved_name']}: {report['improved_min_ms']:.2f}ms")
    print(f"  Speedup: {report['speedup']:.2f}x")

For the cProfile / pstats alternative: cProfile and pstats give cumulative function-level timing (total time per function across all invocations), which is great for finding which functions are slow; line_profiler goes one level deeper, showing per-line timing within a function, making it the right tool once you've already identified a hot function and need to know exactly which lines within it to optimize.

For the pyinstrument alternative: pyinstrument uses statistical sampling to build a call-frame tree with minimal overhead (roughly 0.2%, versus 10–20% for instrumentation-based profilers); line_profiler instruments every line with exact counter increments, so it shows per-line hits and precise timing. Use pyinstrument for exploratory profiling, and line_profiler once you've found the hot function and need line-level surgery.

The Claude Skills 360 bundle includes line_profiler skill sets covering profile_function() one-shot profiling, the profiling_ctx() context manager, profiler_decorator() for persistent instrumentation, the get_hot_lines() sorted result extractor, stats_to_string() for CI output capture, save_stats()/load_stats() for .lprof persistence, compare() baseline-vs-improved speedup analysis, a pytest fixture for test-integrated profiling, and the kernprof CLI workflow. Start with the free tier to try line-by-line Python performance profiling code generation.
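To make the function-level contrast concrete, here is a minimal standard-library cProfile/pstats sketch (the `slow` function is illustrative); it reports one row per function, not per line:

```python
import cProfile
import io
import pstats

def slow(n):
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()
slow(200_000)
pr.disable()

# Render the top entries by cumulative time into a string.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

This tells you that `slow` is hot, but not which of its lines is responsible; that is where line_profiler takes over.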
