
Claude Code for tabnanny: Python Indentation Checker

Published: January 15, 2029
Read time: 5 min read
By: Claude Skills 360

Python’s tabnanny module checks source files for ambiguous indentation — lines where mixing tabs and spaces makes the indent level unclear. import tabnanny. Check file: tabnanny.check("script.py") — prints a diagnostic for each problem found (it catches errors internally rather than raising them). Check directory: tabnanny.check("src/") — recurses through .py files. Programmatic use: tabnanny.process_tokens(tokens) — raises tabnanny.NannyNag, whose get_lineno(), get_msg(), and get_line() methods describe the offending line. Verbosity: tabnanny.verbose = 1 — prints each filename processed. Filename only: tabnanny.filename_only = 1 — prints only the bad filenames rather than full line details. CLI: python -m tabnanny script.py or python -m tabnanny src/ (add -v for verbose output, -q for filenames only). The module is primarily useful as a pre-commit hook or CI step for Python 2 codebases or mixed-source repos where tabs-vs-spaces ambiguity matters; in Python 3 TabError is raised at compile time for the most egregious cases, but tabnanny catches the subtler ambiguous ones. Claude Code generates pre-commit indentation linters, mixed-indentation auditors, automated code style fixers, and CI indentation check scripts.
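As a minimal sketch of the programmatic path (the ambiguous source and temp-file setup below are only for illustration): because tabnanny.check() prints rather than raises, the trick is to drive tabnanny.process_tokens() yourself and catch NannyNag — with the caveat that newer tokenizers raise TabError for blatant mixes before tabnanny ever sees them, so both exceptions are worth handling.

```python
import os
import tabnanny
import tempfile
import tokenize

# a 4-space indent followed by a tab indent: ambiguous across tab sizes
src = "if True:\n    x = 1\n\ty = 2\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tf:
    tf.write(src)
    path = tf.name

issues = []
try:
    with tokenize.open(path) as f:
        tabnanny.process_tokens(tokenize.generate_tokens(f.readline))
except tabnanny.NannyNag as nag:          # tabnanny's own diagnosis
    issues.append((nag.get_lineno(), nag.get_msg()))
except SyntaxError as e:                  # TabError on newer tokenizers
    issues.append((e.lineno, e.msg))
finally:
    os.unlink(path)

print(issues)
```

Either exception path reports line 3, the line whose indent compares differently depending on tab size.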

CLAUDE.md for tabnanny

## tabnanny Stack
- Stdlib: import tabnanny
- Check file:  tabnanny.check("script.py")    # prints diagnostics; no raise
- Check dir:   tabnanny.check("src/")         # recurses .py files
- Programmatic: tabnanny.process_tokens(tokens)  # raises NannyNag
- Exception:   tabnanny.NannyNag
-              .get_lineno() / .get_msg() / .get_line()
- Verbose:     tabnanny.verbose = 1           # print each file checked
- Filenames:   tabnanny.filename_only = 1     # one filename per bad file
- CLI:         python -m tabnanny path/       # -v verbose, -q filenames only
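Wired into CI, the CLI form is often simplest. One caveat worth hedging: the module's exit status is not a dependable pass/fail signal across Python versions, so this sketch (tabnanny_clean() is a hypothetical helper) treats any output at all as failure instead:

```python
import os
import subprocess
import sys
import tempfile


def tabnanny_clean(target: str) -> bool:
    """True if `python -m tabnanny` reports nothing for target."""
    proc = subprocess.run(
        [sys.executable, "-m", "tabnanny", target],
        capture_output=True, text=True,
    )
    # any stdout/stderr output means a whitespace complaint somewhere
    return not proc.stdout.strip() and not proc.stderr.strip()


# demo on a deliberately clean file
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tf:
    tf.write("def f():\n    return 1\n")
    path = tf.name
ok = tabnanny_clean(path)
os.unlink(path)
print("clean" if ok else "issues found")
```

Returning a bool keeps the helper easy to drop into a pre-commit script that exits nonzero on failure.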

tabnanny Indentation Lint Pipeline

# app/tabnannyutil.py — file check, dir scan, report, tokenize audit, fixer
from __future__ import annotations

import os
import tabnanny
import tokenize
from dataclasses import dataclass, field
from pathlib import Path


# ─────────────────────────────────────────────────────────────────────────────
# 1. Single-file check
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class IndentIssue:
    filename: str
    lineno:   int
    message:  str


def check_file(path: "str | Path") -> list[IndentIssue]:
    """
    Check a single Python file for ambiguous indentation.
    Returns a list of IndentIssue (empty = clean).

    Example:
        issues = check_file("script.py")
        if issues:
            for issue in issues:
                print(f"{issue.filename}:{issue.lineno}  {issue.message}")
    """
    issues: list[IndentIssue] = []
    path = str(path)
    # tabnanny.check() prints diagnostics instead of raising, so tokenize
    # the file ourselves and feed tabnanny.process_tokens(), which raises
    # NannyNag on the first ambiguous line it sees.
    try:
        with tokenize.open(path) as f:
            tabnanny.process_tokens(tokenize.generate_tokens(f.readline))
    except tabnanny.NannyNag as nag:
        issues.append(IndentIssue(
            filename=path,
            lineno=nag.get_lineno(),
            message=nag.get_msg(),
        ))
    except SyntaxError as e:
        # includes TabError, which newer tokenizers raise for blatant mixes
        issues.append(IndentIssue(
            filename=path,
            lineno=e.lineno or 0,
            message=f"{type(e).__name__}: {e.msg}",
        ))
    except (tokenize.TokenizeError, OSError) as e:
        issues.append(IndentIssue(
            filename=path,
            lineno=0,
            message=str(e),
        ))
    return issues


# ─────────────────────────────────────────────────────────────────────────────
# 2. Directory scanner
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class ScanReport:
    root:          str
    files_checked: int = 0
    issues:        list[IndentIssue] = field(default_factory=list)

    @property
    def ok(self) -> bool:
        return len(self.issues) == 0

    def summary(self) -> str:
        if self.ok:
            return (f"tabnanny: {self.files_checked} files checked — "
                    f"no indentation issues")
        return (f"tabnanny: {self.files_checked} files checked — "
                f"{len(self.issues)} issue(s) found")


def scan_directory(root: "str | Path", recursive: bool = True) -> ScanReport:
    """
    Scan all .py files under root for indentation issues.

    Example:
        report = scan_directory("src/")
        print(report.summary())
        for issue in report.issues:
            print(f"  {issue.filename}:{issue.lineno}  {issue.message}")
    """
    root = Path(root)
    pattern = "**/*.py" if recursive else "*.py"
    py_files = list(root.glob(pattern))
    report = ScanReport(root=str(root))
    report.files_checked = len(py_files)
    for py_file in py_files:
        issues = check_file(py_file)
        report.issues.extend(issues)
    return report


# ─────────────────────────────────────────────────────────────────────────────
# 3. Line-level mixed-indent detector
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class MixedIndentLine:
    lineno: int
    text:   str
    has_tab: bool
    has_space: bool


def detect_mixed_indent(source: str) -> list[MixedIndentLine]:
    """
    Find lines whose leading whitespace mixes tabs and spaces.
    Works on Python 3 source where TabError may not be raised for subtle mixes.

    Example:
        source = open("legacy.py").read()
        problems = detect_mixed_indent(source)
        for p in problems:
            print(f"  line {p.lineno}: {p.text!r}")
    """
    problems: list[MixedIndentLine] = []
    for lineno, line in enumerate(source.splitlines(), 1):
        leading = line[: len(line) - len(line.lstrip())]
        if not leading:
            continue
        has_tab = "\t" in leading
        has_space = " " in leading
        if has_tab and has_space:
            problems.append(MixedIndentLine(
                lineno=lineno,
                text=line,
                has_tab=has_tab,
                has_space=has_space,
            ))
    return problems
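The underlying reason these lines are dangerous: the same leading whitespace can occupy different columns depending on tab size. A quick illustration using str.expandtabs (indent_width() is a hypothetical helper, not part of tabnanny):

```python
def indent_width(leading: str, tabsize: int) -> int:
    """Column reached by leading whitespace, starting from column 0."""
    return len(leading.expandtabs(tabsize))


four_spaces = "    "
one_tab = "\t"

# equal at tab size 4, but the tab is deeper at tab size 8 —
# exactly the ambiguity tabnanny exists to flag
print(indent_width(four_spaces, 4), indent_width(one_tab, 4))  # 4 4
print(indent_width(four_spaces, 8), indent_width(one_tab, 8))  # 4 8
```

A block indented with the tab would nest inside the spaces at one editor setting and align with them at another, which is why no single "correct" parse exists.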


# ─────────────────────────────────────────────────────────────────────────────
# 4. Indentation statistics
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class IndentStats:
    filename:        str
    total_lines:     int = 0
    indented_lines:  int = 0
    tab_only_lines:  int = 0
    space_only_lines: int = 0
    mixed_lines:     int = 0
    dominant_style:  str = "unknown"   # "tabs", "spaces", "mixed", "none"


def analyze_indent_style(path: "str | Path") -> IndentStats:
    """
    Return indentation statistics for a Python source file.

    Example:
        stats = analyze_indent_style("app.py")
        print(stats.dominant_style, stats.mixed_lines)
    """
    path = Path(path)
    stats = IndentStats(filename=str(path))
    try:
        source = path.read_text(encoding="utf-8", errors="replace")
    except OSError:
        return stats

    for line in source.splitlines():
        stats.total_lines += 1
        leading = line[: len(line) - len(line.lstrip())]
        if not leading:
            continue
        stats.indented_lines += 1
        has_tab = "\t" in leading
        has_space = " " in leading
        if has_tab and has_space:
            stats.mixed_lines += 1
        elif has_tab:
            stats.tab_only_lines += 1
        else:
            stats.space_only_lines += 1

    if stats.indented_lines == 0:
        stats.dominant_style = "none"
    elif stats.mixed_lines > 0:
        stats.dominant_style = "mixed"
    elif stats.tab_only_lines > stats.space_only_lines:
        stats.dominant_style = "tabs"
    elif stats.space_only_lines > 0:
        stats.dominant_style = "spaces"
    else:
        stats.dominant_style = "none"

    return stats


# ─────────────────────────────────────────────────────────────────────────────
# 5. Tab-to-spaces fixer (in memory)
# ─────────────────────────────────────────────────────────────────────────────

def expand_leading_tabs(source: str, tabsize: int = 4) -> str:
    """
    Convert leading tabs to spaces (tabsize spaces per tab) in every line.
    Only modifies leading whitespace — tabs inside strings are untouched.

    Example:
        fixed = expand_leading_tabs(open("old.py").read())
        open("fixed.py", "w").write(fixed)
    """
    lines: list[str] = []
    for line in source.splitlines(keepends=True):
        stripped = line.lstrip("\t ")
        leading = line[: len(line) - len(stripped)]
        # expand tabs in leading whitespace only
        expanded_leading = leading.expandtabs(tabsize)
        lines.append(expanded_leading + stripped)
    return "".join(lines)
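To apply the fixer on disk with a safety net, a hypothetical wrapper (fix_file() is not part of the module; the leading-tab expansion is inlined so this sketch stands alone) can confirm the converted source still parses before overwriting:

```python
import ast
from pathlib import Path


def _expand_leading_tabs(source: str, tabsize: int = 4) -> str:
    # expand tabs in leading whitespace only, as in the section above
    out = []
    for line in source.splitlines(keepends=True):
        stripped = line.lstrip("\t ")
        leading = line[: len(line) - len(stripped)]
        out.append(leading.expandtabs(tabsize) + stripped)
    return "".join(out)


def fix_file(path: str, tabsize: int = 4) -> bool:
    """Rewrite path with space-based indentation; True if the file changed."""
    p = Path(path)
    original = p.read_text(encoding="utf-8")
    fixed = _expand_leading_tabs(original, tabsize)
    if fixed == original:
        return False
    ast.parse(fixed)  # raises SyntaxError if the conversion broke anything
    p.write_text(fixed, encoding="utf-8")
    return True
```

The ast.parse() check guards against the rare case where re-widthing indentation changes block structure; failing loudly beats silently corrupting a file.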


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import tempfile

    print("=== tabnanny demo ===")

    # ── check_file on clean source ────────────────────────────────────────────
    print("\n--- check_file (clean) ---")
    clean_src = "def f():\n    x = 1\n    return x\n"
    with tempfile.NamedTemporaryFile(suffix=".py", mode="w",
                                     delete=False) as tf:
        tf.write(clean_src)
        clean_path = tf.name
    issues = check_file(clean_path)
    print(f"  issues on clean file: {issues}")
    os.unlink(clean_path)

    # ── check_file on ambiguous source ────────────────────────────────────────
    print("\n--- check_file (ambiguous) ---")
    # a 4-space indent and a tab indent in the same block compare
    # differently at different tab sizes — the classic ambiguous case
    bad_src = "if True:\n    x = 1\n\ty = 2\n"  # spaces, then a tab
    with tempfile.NamedTemporaryFile(suffix=".py", mode="w",
                                     delete=False) as tf:
        tf.write(bad_src)
        bad_path = tf.name
    issues = check_file(bad_path)
    if issues:
        for issue in issues:
            print(f"  {issue.filename}:{issue.lineno}  {issue.message}")
    else:
        print("  (no issues detected — Python version may handle this)")
    os.unlink(bad_path)

    # ── detect_mixed_indent ───────────────────────────────────────────────────
    print("\n--- detect_mixed_indent ---")
    mixed_src = (
        "def f():\n"
        "    x = 1\n"          # 4 spaces
        "\t y = 2\n"           # tab + space
        "    return x\n"
    )
    problems = detect_mixed_indent(mixed_src)
    for p in problems:
        print(f"  line {p.lineno}: {p.text!r}")

    # ── analyze_indent_style ──────────────────────────────────────────────────
    print("\n--- analyze_indent_style (this file) ---")
    stats = analyze_indent_style(__file__)
    print(f"  total_lines={stats.total_lines}")
    print(f"  indented_lines={stats.indented_lines}")
    print(f"  tab_only={stats.tab_only_lines}  "
          f"space_only={stats.space_only_lines}  "
          f"mixed={stats.mixed_lines}")
    print(f"  dominant_style={stats.dominant_style!r}")

    # ── expand_leading_tabs ───────────────────────────────────────────────────
    print("\n--- expand_leading_tabs ---")
    tabbed = "def g():\n\tx = 1\n\t\ty = 2\n"
    fixed = expand_leading_tabs(tabbed, tabsize=4)
    print("  input:")
    for line in tabbed.splitlines():
        print(f"    {line!r}")
    print("  fixed:")
    for line in fixed.splitlines():
        print(f"    {line!r}")

    print("\n=== done ===")

For the pycodestyle / flake8 (PyPI) alternative — pycodestyle.StyleGuide().check_files(["script.py"]) checks indentation consistency (W191 for tabs, E1xx for spacing) along with the full PEP 8 style. Use pycodestyle/flake8 for PEP 8 enforcement and CI linting pipelines; use tabnanny when you only need to catch the narrow class of tab/space ambiguity that Python’s tokenizer would misparse, especially in Python 2 compatibility work or repositories where tabs intentionally appear.

For the autopep8 / black (PyPI) alternative — autopep8.fix_code(source) or black.format_str(source, mode=black.Mode()) automatically reformats code to consistent space-based indentation. Use black or autopep8 for automated indentation normalization in modern Python codebases; use tabnanny + expand_leading_tabs() for lighter-weight detection and programmatic fixes when you can’t take a full formatter dependency.

The Claude Skills 360 bundle includes tabnanny skill sets covering the check_file()/IndentIssue single-file checker, the scan_directory()/ScanReport recursive scanner, the detect_mixed_indent()/MixedIndentLine line-level detector, the analyze_indent_style()/IndentStats statistics, and the expand_leading_tabs() in-memory fixer. Start with the free tier to try indentation linting patterns and tabnanny pipeline code generation.

