Coverage.py measures which lines and branches of Python code are executed by tests. Install with `pip install "coverage[toml]"`, run tests under measurement with `coverage run -m pytest tests/`, and inspect results with `coverage report --show-missing`, `coverage html` (writes htmlcov/index.html), `coverage xml` (for CI), or `coverage json`. `coverage erase` clears stored data; `coverage combine` merges parallel-mode data files. Configure in pyproject.toml: `[tool.coverage.run]` takes `source = ["src"]`, `branch = true`, and `omit = ["tests/*", "*__init__*"]`; `[tool.coverage.report]` takes `fail_under = 80`, `show_missing = true`, `skip_covered = false`, and `exclude_lines` patterns such as `"if TYPE_CHECKING:"`, `"raise NotImplementedError"`, and `"@(abc\\.)?abstractmethod"`; `[tool.coverage.html]` sets `directory = "htmlcov"`. The same options exist as CLI flags: `coverage run --source=src --branch -m pytest`, or parallel runs via `coverage run --parallel-mode -m pytest && coverage combine`. The pytest plugin wraps it all: `pytest --cov=src --cov-branch --cov-report=html --cov-report=term-missing --cov-fail-under=80`. Exclude a line or block with `# pragma: no cover`. `coverage run --context=integration` labels which tests hit which lines, and `coverage report --contexts=REGEX` filters the report by context. `coverage debug sys` dumps diagnostic information. `[tool.coverage.paths]` maps CI paths to local paths for combined reports, and `source_pkgs = ["mypackage"]` selects sources by importable package name rather than file path. Claude Code generates Coverage.py configurations, pytest-cov CI pipelines, and HTML coverage badge workflows.
# CLAUDE.md for Coverage.py
## Coverage.py Stack
- Version: coverage >= 7.4 | pip install "coverage[toml]" "pytest-cov"
- Run: coverage run -m pytest | pytest --cov=src --cov-branch
- Report: coverage report --show-missing | coverage html | coverage xml
- Config: pyproject.toml [tool.coverage.run] source=, branch=true, omit=
- Threshold: fail_under=80 in [tool.coverage.report] | --cov-fail-under=
- Exclude: # pragma: no cover | exclude_lines patterns in pyproject.toml
- Parallel: coverage run --parallel-mode + coverage combine
## Coverage.py Measurement Pipeline
# This file demonstrates Coverage.py configuration and usage patterns.
# The main config lives in pyproject.toml (shown below as a string).
# ── pyproject.toml coverage configuration ────────────────────────────────────
PYPROJECT_TOML_COVERAGE = """
[tool.coverage.run]
# Track which source files are instrumented
source = ["src"]
# Branch coverage: track which conditional branches are taken
branch = true
# Files/directories to exclude from measurement
omit = [
    "tests/*",
    "*/migrations/*",
    "*/__init__.py",
    "src/*/version.py",
    "setup.py",
    "conftest.py",
]
# Parallel mode: each process writes its own .coverage.XXXXXXXX file
# Combine them afterward with `coverage combine`
parallel = false
# Label runs with a context name (e.g., "unit", "integration")
# context = "unit"
[tool.coverage.report]
# Minimum coverage percentage — fails if below
fail_under = 80
# Show line numbers of uncovered lines
show_missing = true
# Skip files that are 100% covered in the report
skip_covered = false
# Skip empty files
skip_empty = true
# Precision for percentage display
precision = 1
# Regex patterns for lines to exclude from coverage
exclude_lines = [
    # Default pragma
    "pragma: no cover",
    # Abstract methods — implementations are tested, not declarations
    "@(abc\\.)?abstractmethod",
    # Type checking imports — not executed at runtime
    "if TYPE_CHECKING:",
    # Protocol/interface bodies
    "\\.\\.\\.",
    # Not-implemented methods
    "raise NotImplementedError",
    # Debug-mode only code
    "if __debug__:",
    "if settings.DEBUG",
]
[tool.coverage.html]
directory = "htmlcov"
title = "Test Coverage Report"
show_contexts = false
[tool.coverage.xml]
output = "coverage.xml"
[tool.coverage.json]
output = "coverage.json"
show_contexts = false
[tool.coverage.paths]
# Map CI paths to local paths for combined reports
# Useful when tests run inside Docker with different paths
source = [
    "src/",
    "/app/src/",
]
"""
# ── .coveragerc alternative format ───────────────────────────────────────────
COVERAGERC = """
[run]
source = src
branch = True
omit =
    tests/*
    */migrations/*
    */__init__.py

[report]
fail_under = 80
show_missing = True
exclude_lines =
    pragma: no cover
    if TYPE_CHECKING:
    raise NotImplementedError
    @abstractmethod

[html]
directory = htmlcov
"""
# ── CI/CD integration examples ────────────────────────────────────────────────
GITHUB_ACTIONS_WORKFLOW = """
# .github/workflows/test.yml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: pip install -e ".[dev]"
      - name: Run tests with coverage
        run: |
          pytest tests/ \\
            --cov=src \\
            --cov-branch \\
            --cov-report=xml:coverage.xml \\
            --cov-report=term-missing \\
            --cov-fail-under=80 \\
            -q
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage.xml
          fail_ci_if_error: true
          token: ${{ secrets.CODECOV_TOKEN }}
"""
GITLAB_CI_PIPELINE = """
# .gitlab-ci.yml excerpt
test:
  image: python:3.12
  script:
    - pip install -e ".[dev]"
    - coverage run --branch -m pytest tests/ -q
    - coverage report --fail-under=80
    - coverage xml
  coverage: '/^TOTAL.+?(\\d+%)$/'  # regex to parse % for the GitLab badge
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
"""
# ── Parallel test run coverage ────────────────────────────────────────────────
PARALLEL_COVERAGE_SCRIPT = """
# Run tests in parallel and combine coverage data
# With pytest-cov + pytest-xdist, worker data is combined automatically:
pytest tests/ -n auto --cov=src --cov-branch --no-cov-on-fail

# When using raw coverage (not pytest-cov), worker subprocesses are only
# measured if coverage.process_startup() runs in each one (e.g. via a
# .pth file) and COVERAGE_PROCESS_START points at the config:
# export COVERAGE_PROCESS_START=.coveragerc
# coverage run --parallel-mode --branch -m pytest tests/
# coverage combine            # merge the .coverage.* data files
# coverage report --show-missing

# Full pytest-cov invocation with all report formats:
pytest tests/ \\
    --cov=src \\
    --cov-branch \\
    --cov-report=term-missing \\
    --cov-report=html \\
    --cov-report=xml \\
    -n auto
"""
# ── Example code with pragma: no cover usage ──────────────────────────────────
from __future__ import annotations
import sys
from typing import TYPE_CHECKING
if TYPE_CHECKING:  # pragma: no cover
    from collections.abc import Sequence
def calculate_discount(
    price: float,
    quantity: int,
    vip: bool = False,
) -> float:
    """
    Calculate order discount. Fully testable — no pragmas needed.

    Branches:
      (quantity < 10, quantity >= 10)
      (vip, not vip)
      (high_qty + vip together)
    """
    discount = 0.0
    if quantity >= 10:
        discount += 0.10  # bulk discount
    if vip:
        discount += 0.05  # VIP discount
    if quantity >= 50 and vip:
        discount += 0.05  # high-volume VIP bonus
    return price * (1 - discount)
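Branch coverage of calculate_discount reaches 100% only when every condition is driven both ways. A pytest-style sketch (the function body is repeated so the block is self-contained):

```python
import math

def calculate_discount(price: float, quantity: int, vip: bool = False) -> float:
    discount = 0.0
    if quantity >= 10:
        discount += 0.10
    if vip:
        discount += 0.05
    if quantity >= 50 and vip:
        discount += 0.05
    return price * (1 - discount)

# One test per branch direction — --cov-branch flags any missing pair
def test_no_discount():
    assert math.isclose(calculate_discount(100.0, 1), 100.0)

def test_bulk_only():
    assert math.isclose(calculate_discount(100.0, 10), 90.0)

def test_vip_only():
    assert math.isclose(calculate_discount(100.0, 1, vip=True), 95.0)

def test_high_volume_vip():
    assert math.isclose(calculate_discount(100.0, 50, vip=True), 80.0)
```

Dropping any one of these tests makes `coverage report --show-missing` print a partial-branch arrow such as `187->189` for the condition that was only exercised in one direction.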
def get_platform() -> str:  # pragma: no cover
    """Platform detection — not tested because it varies by environment."""
    if sys.platform == "win32":
        return "windows"
    elif sys.platform == "darwin":
        return "macos"
    return "linux"
class DatabaseConnection:
    def connect(self, dsn: str) -> None:
        """Real connection — mocked in tests, excluded from coverage check."""
        pass  # pragma: no cover

    def execute(self, sql: str, params: tuple = ()) -> list:
        """Core SQL logic — fully tested via mock."""
        if not sql.strip():
            raise ValueError("SQL cannot be empty")
        return []

    def close(self) -> None:  # pragma: no cover
        """Cleanup that only runs in production environments."""
        pass
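The "mocked in tests" idea above can be sketched with stdlib `unittest.mock`: the excluded `connect()` body never runs, while `execute()`'s validation logic is exercised for real (class body repeated so the block is self-contained):

```python
from unittest.mock import patch

class DatabaseConnection:
    def connect(self, dsn: str) -> None:  # pragma: no cover
        raise RuntimeError("no real database in unit tests")

    def execute(self, sql: str, params: tuple = ()) -> list:
        if not sql.strip():
            raise ValueError("SQL cannot be empty")
        return []

db = DatabaseConnection()

# The excluded connect() body is patched out, so tests never execute it
with patch.object(DatabaseConnection, "connect", return_value=None):
    db.connect("postgresql://ignored")

rows = db.execute("SELECT 1")   # core logic runs for real
try:
    db.execute("   ")
    empty_sql_rejected = False
except ValueError:
    empty_sql_rejected = True
```

This split keeps the coverage number honest: only code that genuinely cannot run in tests carries the pragma.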
# ── pytest conftest.py integration ───────────────────────────────────────────
CONFTEST_COVERAGE = """
# conftest.py — coverage hooks for context tagging
import coverage
import pytest
def pytest_configure(config):
    \"\"\"Label the coverage context based on test markers.\"\"\"
    pass  # coverage context is set via pytest-cov --cov-context=test


@pytest.fixture(scope="session", autouse=True)
def coverage_context(request):
    \"\"\"Log which test touched which code for context-based reporting.\"\"\"
    yield  # pytest-cov handles context automatically with --cov-context=test
"""
# ── Coverage badge generation ─────────────────────────────────────────────────
BADGE_SCRIPT = """
# Generate a coverage badge after tests
pip install coverage-badge
coverage-badge -o coverage.svg -f # overwrite existing
# For shields.io dynamic badge from coverage.json:
# 1. Run: coverage json -o coverage.json
# 2. Parse: jq '.totals.percent_covered_display' coverage.json
# 3. In README:
# 
"""
# ── Makefile targets ──────────────────────────────────────────────────────────
MAKEFILE_TARGETS = """
# Makefile targets — use `make coverage` in development
# (recipe lines must start with a real tab; \\t below expands to one)
test:
\tpytest tests/ -q

coverage:
\tcoverage erase
\tcoverage run --branch -m pytest tests/ -q
\tcoverage report --show-missing
\tcoverage html
\t@echo "HTML report: htmlcov/index.html"

coverage-ci:
\tcoverage run --branch -m pytest tests/ -q
\tcoverage report --fail-under=80
\tcoverage xml

coverage-clean:
\tcoverage erase
\trm -rf htmlcov/ coverage.xml coverage.json .coverage*
"""
if __name__ == "__main__":
    print("Coverage.py Measurement Examples")
    print("=" * 50)
    print("\nBasic workflow:")
    print("  1. pip install coverage[toml] pytest-cov")
    print("  2. Add [tool.coverage.run] to pyproject.toml")
    print("  3. pytest --cov=src --cov-branch --cov-report=html")
    print("  4. open htmlcov/index.html")
    print("\nCI workflow:")
    print("  pytest --cov=src --cov-branch --cov-report=xml --cov-fail-under=80")
    print("  → uploads coverage.xml to Codecov/SonarQube")
    print("\nBranch coverage shows:")
    print("  'missing branch' = a condition whose True/False path was never tested")
    print("  Example: if x > 0: ... — must test both x=5 and x=-1")
    print("\nExclude boilerplate with # pragma: no cover")
    print("  on: platform detection, debug paths, abstract method bodies")
For the sys.settrace manual-tracing alternative: a custom trace function records line execution but misses branch decisions (was the else clause ever taken?), while `coverage run --branch` tracks both line execution and every conditional branch. `coverage report --show-missing` then lists entries like `45-47, 92->94`, meaning lines 45-47 are uncovered and the jump from line 92 to line 94 (a branch not taken) was never exercised.

For the pytest-cov plugin alternative: pytest-cov is a thin wrapper around coverage.py that starts measurement before tests run and stops and saves it afterward, so all `[tool.coverage.*]` configuration in pyproject.toml applies identically whether you invoke `coverage run -m pytest` or `pytest --cov`. `--cov-fail-under=80` maps directly to `fail_under = 80` in the config file, making CI thresholds enforceable in one place.
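The settrace limitation can be seen in a few lines of stdlib code. A sketch (`classify` is made up for illustration):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record line events only — this sees *lines*, not branch decisions
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

def classify(x: int) -> str:
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
classify(5)            # only the True path runs
sys.settrace(None)

# `executed` holds the line numbers of the `if` and the first `return`.
# The tracer cannot tell that the x <= 0 branch (the jump past
# `return "positive"`) was never taken — that is exactly what
# `coverage run --branch` adds on top of line tracing.
```

This is also why plain line coverage can read 100% while a conditional's False path is untested; branch mode closes that gap.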