line_profiler profiles Python code line by line. Install: pip install line_profiler. CLI: kernprof -l -v script.py — enables the @profile decorator and writes script.py.lprof. View saved stats: python -m line_profiler script.py.lprof. Programmatic: from line_profiler import LineProfiler; lp = LineProfiler(); lp.add_function(my_fn); lp.enable_by_count(); my_fn(args); lp.print_stats(). Decorator: a LineProfiler instance is callable, so wrapped = lp(fn); wrapped(arg) profiles each call. Stats: lp.get_stats() → LineStats with a timings dict and a unit (seconds per timer tick). Output units: lp.print_stats(output_unit=1e-3) → ms; output_unit=1e-6 → µs. Persistence: lp.dump_stats("out.lprof"); lp.load_stats("out.lprof"). Toggling: lp.enable() / lp.disable(); or use the instance as a context manager: with lp: my_fn(). One-shot: lp.runcall(fn, *args). Output sorts by line number by default. Jupyter: %load_ext line_profiler; %lprun -f fn fn(args). The global @profile decorator only exists under kernprof; use LineProfiler() directly for library code. Claude Code generates line_profiler hot-path analysis, contextmanager profiling helpers, pytest fixture profilers, and line-timing reports.
# CLAUDE.md for line_profiler
## line_profiler Stack
- Version: line_profiler >= 4.0 | pip install line_profiler
- CLI: kernprof -l -v script.py (adds @profile; .lprof file created)
- View: python -m line_profiler script.py.lprof
- Programmatic: lp = LineProfiler(); lp.add_function(fn); lp.enable_by_count(); fn(); lp.print_stats()
- Output unit: lp.print_stats(output_unit=1e-3) # milliseconds
- Jupyter: %load_ext line_profiler; %lprun -f fn fn(args)
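Because the global `@profile` decorator listed above only exists when a script runs under kernprof, a common guard pattern keeps the same script runnable with plain `python` too. A sketch, with a hypothetical `work` function:

```python
# kernprof injects `profile` into builtins; provide a no-op fallback so the
# decorated script also runs without kernprof.
import builtins

if not hasattr(builtins, "profile"):   # plain `python script.py` run
    def profile(fn):                   # no-op stand-in for the decorator
        return fn
else:
    profile = builtins.profile         # real line-timing decorator under kernprof

@profile
def work(n: int) -> int:
    return sum(range(n))

print(work(10))   # 45
```

Under `kernprof -l -v script.py` the real decorator records line timings; under plain Python the no-op keeps behavior identical.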
## line_profiler Hot Path Analysis Pipeline
# app/profiling.py — LineProfiler helpers, context managers, pytest fixtures, and reports
from __future__ import annotations
import io
import time
from contextlib import contextmanager
from functools import wraps
from pathlib import Path
from typing import Any, Callable
from line_profiler import LineProfiler
# ─────────────────────────────────────────────────────────────────────────────
# 1. Core profiler helpers
# ─────────────────────────────────────────────────────────────────────────────
def profile_function(
fn: Callable,
*args: Any,
extra_functions: list[Callable] | None = None,
output_unit: float = 1e-3,
print_output: bool = True,
**kwargs: Any,
) -> tuple[Any, LineProfiler]:
"""
Profile a single function call line by line.
extra_functions: also profile these callees if you want their line data.
output_unit: 1e-3 = ms, 1e-6 = µs, 1.0 = seconds.
Returns (result, profiler).
Example:
result, lp = profile_function(process_data, large_df, output_unit=1e-3)
"""
lp = LineProfiler()
lp.add_function(fn)
for f in extra_functions or []:
lp.add_function(f)
result = lp.runcall(fn, *args, **kwargs)
if print_output:
lp.print_stats(output_unit=output_unit)
return result, lp
@contextmanager
def profiling_ctx(
*functions: Callable,
output_unit: float = 1e-3,
print_output: bool = True,
dump_path: str | Path | None = None,
):
"""
Context manager for profiling a block of code.
Functions passed in will be profiled line by line.
Usage:
def compute(data): ...
def transform(data): ...
with profiling_ctx(compute, transform) as lp:
result = compute(big_data)
output = transform(result)
# stats printed automatically on exit
"""
lp = LineProfiler()
for fn in functions:
lp.add_function(fn)
lp.enable_by_count()
try:
yield lp
finally:
lp.disable_by_count()
if print_output:
lp.print_stats(output_unit=output_unit)
if dump_path:
lp.dump_stats(str(dump_path))
def profiler_decorator(
output_unit: float = 1e-3,
print_output: bool = True,
include_callees: list[Callable] | None = None,
) -> Callable:
"""
Decorator factory for line-by-line profiling.
Profiles the decorated function every time it is called.
Usage:
@profiler_decorator(output_unit=1e-6)
def hot_function(data):
...
"""
def decorator(fn: Callable) -> Callable:
lp = LineProfiler()
lp.add_function(fn)
for callee in include_callees or []:
lp.add_function(callee)
@wraps(fn)
def wrapper(*args: Any, **kwargs: Any) -> Any:
result = lp.runcall(fn, *args, **kwargs)
if print_output:
lp.print_stats(output_unit=output_unit)
return result
wrapper._line_profiler = lp # expose for inspection
return wrapper
return decorator
# ─────────────────────────────────────────────────────────────────────────────
# 2. Stats parsing helpers
# ─────────────────────────────────────────────────────────────────────────────
def get_hot_lines(
    lp: LineProfiler,
    top_n: int = 10,
    output_unit: float = 1e-3,
) -> list[dict]:
    """
    Extract the top N slowest lines from a profiler.
    Returns a list of dicts sorted by total time, descending.
    Keys: file, line, func, hits, time_ms, per_hit_ms, percent, code.
    Note: time_ms and per_hit_ms are expressed in output_unit
    (default 1e-3 → milliseconds; pass 1e-6 for microseconds).
    """
    stats = lp.get_stats()
    unit = stats.unit  # seconds per timer tick
    rows: list[dict] = []
    for (filename, start_lineno, func_name), timings in stats.timings.items():
        for lineno, nhits, line_time in timings:
            total_s = line_time * unit
            rows.append({
                "file": filename,
                "line": lineno,
                "func": func_name,
                "hits": nhits,
                "time_ms": total_s / output_unit,
                "per_hit_ms": (total_s / nhits / output_unit) if nhits else 0.0,
                "percent": 0.0,  # filled below
                "code": "",      # source line not resolved here
            })
    if not rows:
        return []
    grand_total = sum(r["time_ms"] for r in rows)
    for r in rows:
        r["percent"] = (r["time_ms"] / grand_total * 100) if grand_total else 0.0
    rows.sort(key=lambda r: r["time_ms"], reverse=True)
    return rows[:top_n]
def stats_to_string(lp: LineProfiler, output_unit: float = 1e-3) -> str:
"""Capture print_stats() output as a string."""
buf = io.StringIO()
lp.print_stats(stream=buf, output_unit=output_unit)
return buf.getvalue()
def save_stats(lp: LineProfiler, path: str | Path) -> Path:
"""Save profiler stats to a .lprof file for later inspection."""
p = Path(path)
lp.dump_stats(str(p))
return p
def load_stats(path: str | Path) -> LineProfiler:
"""Load .lprof stats from disk. Returns a new LineProfiler with stats loaded."""
lp = LineProfiler()
lp.load_stats(str(path))
return lp
# ─────────────────────────────────────────────────────────────────────────────
# 3. Comparison helper
# ─────────────────────────────────────────────────────────────────────────────
def compare(
baseline: Callable,
improved: Callable,
*args: Any,
iterations: int = 3,
output_unit: float = 1e-3,
**kwargs: Any,
) -> dict:
"""
Run baseline and improved implementations multiple times and compare.
Returns dict with timing stats and speedup ratio.
Example:
from app.data import process_slow, process_fast
report = compare(process_slow, process_fast, large_data, iterations=5)
print(f"Speedup: {report['speedup']:.2f}x")
"""
    def time_fn(fn: Callable) -> tuple[float, LineProfiler]:
        times: list[float] = []
        lp = LineProfiler()
        lp.add_function(fn)
        for _ in range(iterations):
            start = time.perf_counter()
            lp.runcall(fn, *args, **kwargs)
            times.append(time.perf_counter() - start)
        # Wall-clock times include line-profiling overhead, but both sides
        # pay it, so the speedup ratio remains meaningful.
        return min(times), lp
base_time, base_lp = time_fn(baseline)
new_time, new_lp = time_fn(improved)
speedup = base_time / new_time if new_time > 0 else float("inf")
return {
"baseline_name": baseline.__name__,
"improved_name": improved.__name__,
"baseline_min_ms": base_time * 1000,
"improved_min_ms": new_time * 1000,
"speedup": speedup,
"baseline_profiler": base_lp,
"improved_profiler": new_lp,
}
# ─────────────────────────────────────────────────────────────────────────────
# 4. pytest fixture
# ─────────────────────────────────────────────────────────────────────────────
PYTEST_FIXTURE = '''
# conftest.py — line_profiler pytest fixture
import pytest
from line_profiler import LineProfiler
from app.profiling import get_hot_lines, stats_to_string
@pytest.fixture
def line_profile():
"""
Pytest fixture that returns a LineProfiler instance.
Usage in tests:
def test_my_function(line_profile):
line_profile.add_function(my_module.hot_fn)
line_profile.enable_by_count()
result = my_module.hot_fn(test_input)
line_profile.disable_by_count()
line_profile.print_stats(output_unit=1e-3)
hot = get_hot_lines(line_profile, top_n=5)
assert hot[0]["percent"] < 80.0 # no single line dominates
"""
lp = LineProfiler()
yield lp
# Optional: print stats after test even on failure
try:
lp.print_stats(output_unit=1e-3)
except Exception:
pass
'''
# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────
def _slow_sum(data: list) -> float:
"""Intentionally slow example function for demonstration."""
total = 0.0
for i, val in enumerate(data): # line A — loop overhead
total += val * 1.1 # line B — multiply
if val > 500: # line C — conditional
total -= 0.01 * val # line D — adjustment
result = total / max(len(data), 1) # line E — average
return result
def _fast_sum(data: list) -> float:
    """Faster version for timing comparison (not numerically identical)."""
    import statistics
    return statistics.mean(data)
if __name__ == "__main__":
data = list(range(1, 10_001))
print("=== profile_function ===")
result, lp = profile_function(_slow_sum, data, output_unit=1e-6, print_output=False)
print(f" Result: {result:.4f}")
hot = get_hot_lines(lp, top_n=5, output_unit=1e-6)
    print("  Top lines:")
for row in hot:
print(f" Line {row['line']}: {row['time_ms']:.2f}µs ({row['percent']:.1f}%)")
print("\n=== profiling_ctx ===")
with profiling_ctx(_slow_sum, output_unit=1e-3, print_output=False) as lp2:
for _ in range(5):
_slow_sum(data)
print(f" Profiled 5 calls; {len(lp2.get_stats().timings)} function(s) recorded")
print("\n=== compare: slow vs fast ===")
report = compare(_slow_sum, _fast_sum, data, iterations=3)
print(f" {report['baseline_name']}: {report['baseline_min_ms']:.2f}ms")
print(f" {report['improved_name']}: {report['improved_min_ms']:.2f}ms")
print(f" Speedup: {report['speedup']:.2f}x")
For the cProfile / pstats alternative: cProfile and pstats give cumulative function-level timing (total time per function across all invocations), which is great for finding which functions are slow. line_profiler goes one level deeper, showing per-line timing within a function, making it the right tool once you have already identified a hot function and need to know exactly which lines inside it to optimize.

For the pyinstrument alternative: pyinstrument uses statistical sampling to build a call-frame tree with minimal overhead (roughly 0.2%, versus 10–20% for instrumentation-based profilers). line_profiler instruments every line with exact counter increments, so it reports per-line hits and precise timings. Use pyinstrument for exploratory profiling, and line_profiler once you have found the hot function and need line-level surgery.

The Claude Skills 360 bundle includes line_profiler skill sets covering profile_function() one-shot profiling, the profiling_ctx() context manager, profiler_decorator() for persistent instrumentation, the get_hot_lines() sorted result extractor, stats_to_string() for CI output capture, save_stats()/load_stats() for .lprof persistence, compare() baseline-vs-improved speedup analysis, a pytest fixture for test-integrated profiling, and the kernprof CLI workflow. Start with the free tier to try line-by-line Python performance profiling code generation.
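The function-level view described above can be sketched with the stdlib alone; `hot` is a hypothetical function used only for illustration:

```python
# cProfile/pstats: totals per function, no per-line detail. This is the
# coarser first pass before reaching for line_profiler.
import cProfile
import io
import pstats

def hot(n: int) -> int:
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()
hot(100_000)
pr.disable()

buf = io.StringIO()
stats = pstats.Stats(pr, stream=buf).sort_stats("cumulative")
stats.print_stats(5)       # top 5 entries by cumulative time
report = buf.getvalue()
print("hot" in report)     # the hot() frame appears at function granularity
```

The report names `hot` and its callees but attributes time to whole functions, which is exactly the gap line_profiler fills.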