Claude Code for tomllib: TOML Config Parsing in Python — Claude Skills 360 Blog
AI

Claude Code for tomllib: TOML Config Parsing in Python

Published: June 13, 2028
Read time: 5 min read
By: Claude Skills 360

tomllib is the TOML parser in the Python 3.11+ standard library (backport for older versions: pip install tomli). The essentials:

- Import: Python >= 3.11: import tomllib. Python < 3.11: pip install tomli, then import tomli as tomllib.
- Load a file: with open("config.toml", "rb") as f: cfg = tomllib.load(f). Binary mode is required.
- Load a string: cfg = tomllib.loads('[server]\nport = 8080').
- Handle errors: try: tomllib.loads(s) except tomllib.TOMLDecodeError as e: ....
- Access nested tables: cfg["server"]["host"]. An array such as cfg["servers"] parses to a list; an inline table {host = "localhost", port = 5432} parses to a dict.
- Type mapping: a TOML date becomes datetime.date, a datetime becomes datetime.datetime, booleans become True/False, integers stay int (cfg["port"] → 8080), floats stay float (cfg["ratio"] → 0.95). Multi-line strings use """...""".
- Write TOML (tomllib is read-only): pip install tomli_w; then tomli_w.dumps({"key": "val"}) or tomli_w.dump(obj, file).
- Read pyproject.toml: with open("pyproject.toml", "rb") as f: meta = tomllib.load(f); name = meta["project"]["name"].
- Layer settings: environment override with os.environ.get("HOST") or cfg["server"]["host"]; shallow merge with {**defaults, **cfg}; validate by combining with pydantic or dataclasses for typed config.

Claude Code generates tomllib config loaders, pyproject.toml readers, typed config dataclasses, and environment-layered settings.
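The type mapping above can be checked in a few lines. A minimal sketch, with the version-guarded import covering the tomli backport:

```python
import datetime
import sys

# Python 3.11+ ships tomllib; older versions use the tomli backport
if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # pip install tomli

cfg = tomllib.loads("""
debug = false
port = 8080
ratio = 0.95
date = 2028-06-13
servers = ["alpha", "beta"]
db = { host = "localhost", port = 5432 }
""")

print(cfg["port"], type(cfg["port"]).__name__)   # 8080 int
print(cfg["debug"])                              # False
print(cfg["date"], type(cfg["date"]).__name__)   # 2028-06-13 date
print(cfg["servers"])                            # ['alpha', 'beta']
print(cfg["db"]["port"])                         # 5432
```

Note that dates and datetimes arrive as real datetime objects, with no string parsing required.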

CLAUDE.md for tomllib

## tomllib Stack
- Python >= 3.11: import tomllib | Python < 3.11: pip install tomli; import tomli as tomllib
- Load file: with open("cfg.toml", "rb") as f: cfg = tomllib.load(f)  # binary mode
- Load str: cfg = tomllib.loads(toml_string)
- Write: pip install tomli_w; import tomli_w; tomli_w.dumps(obj)
- Error: except tomllib.TOMLDecodeError as e: print(e)
- Pattern: load → env override → validate with dataclass/pydantic
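The error-handling bullet above can be sketched as follows (parse_or_report is a hypothetical helper name, not part of tomllib):

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # pip install tomli

def parse_or_report(text: str):
    """Parse TOML, returning None and reporting the error on invalid input."""
    try:
        return tomllib.loads(text)
    except tomllib.TOMLDecodeError as e:
        print(f"Invalid TOML: {e}")
        return None

ok = parse_or_report('port = 8080')     # parses fine
bad = parse_or_report('port = = 8080')  # prints "Invalid TOML: ..."
```

TOMLDecodeError messages include the line and column of the failure, which is usually enough to pinpoint a typo in a config file.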

tomllib Configuration Pipeline

# app/config.py — tomllib load, merge, env override, typed dataclass, pyproject reader
from __future__ import annotations

import os
import sys
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any

# Python 3.11+ stdlib — backport for older versions
if sys.version_info >= (3, 11):
    import tomllib
else:
    try:
        import tomli as tomllib  # pip install tomli
    except ImportError as exc:
        raise ImportError("Install tomli for Python < 3.11: pip install tomli") from exc

# Optional write support
try:
    import tomli_w
    _HAS_TOMLI_W = True
except ImportError:
    _HAS_TOMLI_W = False


# ─────────────────────────────────────────────────────────────────────────────
# 1. Load helpers
# ─────────────────────────────────────────────────────────────────────────────

def load_file(path: str | Path) -> dict:
    """
    Load a TOML file. Binary mode is required by tomllib.

    Example:
        cfg = load_file("config.toml")
        host = cfg["server"]["host"]
    """
    with open(path, "rb") as f:
        return tomllib.load(f)


def load_string(text: str) -> dict:
    """
    Parse a TOML string.

    Example:
        cfg = load_string('[db]\\nhost = "localhost"\\nport = 5432')
    """
    return tomllib.loads(text)


def try_load(path: str | Path, default: dict | None = None) -> dict:
    """
    Load a TOML file; return default dict on missing file or parse error.

    Example:
        cfg = try_load("local.toml", default={})
    """
    try:
        return load_file(path)
    except FileNotFoundError:
        return default if default is not None else {}
    except tomllib.TOMLDecodeError as e:
        raise ValueError(f"Invalid TOML in {path}: {e}") from e


# ─────────────────────────────────────────────────────────────────────────────
# 2. Merge and layering
# ─────────────────────────────────────────────────────────────────────────────

def deep_merge(base: dict, override: dict) -> dict:
    """
    Recursively merge override into base (override wins on conflicts).

    Example:
        defaults = {"server": {"host": "0.0.0.0", "port": 8080, "workers": 4}}
        local    = {"server": {"host": "localhost"}}
        merged   = deep_merge(defaults, local)
        # {"server": {"host": "localhost", "port": 8080, "workers": 4}}
    """
    result = base.copy()
    for key, val in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(val, dict):
            result[key] = deep_merge(result[key], val)
        else:
            result[key] = val
    return result


def load_layered(
    *paths: str | Path,
    ignore_missing: bool = True,
) -> dict:
    """
    Load multiple TOML files in order, merging each on top of the previous.
    Later files override earlier ones.

    Example:
        cfg = load_layered(
            "config.default.toml",
            "config.toml",
            f"config.{os.getenv('ENV','dev')}.toml",
        )
    """
    result: dict = {}
    for path in paths:
        try:
            data = load_file(path)
            result = deep_merge(result, data)
        except FileNotFoundError:
            if not ignore_missing:
                raise
    return result


# ─────────────────────────────────────────────────────────────────────────────
# 3. Environment variable overlay
# ─────────────────────────────────────────────────────────────────────────────

def apply_env_overrides(
    cfg: dict,
    prefix: str = "APP_",
    separator: str = "__",
) -> dict:
    """
    Override cfg values from environment variables.
    Variable names: PREFIX__SECTION__KEY (mapped to nested cfg keys)

    Example:
        os.environ["APP__SERVER__PORT"] = "9090"
        cfg = apply_env_overrides(cfg, prefix="APP__")
        cfg["server"]["port"]  # "9090" (as string)
    """
    result = {k: v for k, v in cfg.items()}

    for env_key, env_val in os.environ.items():
        if not env_key.startswith(prefix):
            continue
        parts = env_key[len(prefix):].lower().split(separator)
        target = result
        for part in parts[:-1]:
            if part not in target:
                target[part] = {}
            target = target[part]
        leaf_key = parts[-1]
        # Cast values: booleans first (note "1"/"0" become bools), then int, float, else str
        if env_val.lower() in ("true", "yes", "1"):
            target[leaf_key] = True
        elif env_val.lower() in ("false", "no", "0"):
            target[leaf_key] = False
        else:
            try:
                target[leaf_key] = int(env_val)
            except ValueError:
                try:
                    target[leaf_key] = float(env_val)
                except ValueError:
                    target[leaf_key] = env_val

    return result


# ─────────────────────────────────────────────────────────────────────────────
# 4. Typed config with dataclasses
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class ServerConfig:
    host:    str   = "0.0.0.0"
    port:    int   = 8080
    workers: int   = 4
    debug:   bool  = False
    reload:  bool  = False


@dataclass
class DatabaseConfig:
    url:          str   = "sqlite:///app.db"
    pool_size:    int   = 5
    pool_timeout: float = 30.0
    echo:         bool  = False


@dataclass
class LoggingConfig:
    level:  str = "INFO"
    format: str = "json"
    file:   str | None = None


@dataclass
class AppConfig:
    server:   ServerConfig   = field(default_factory=ServerConfig)
    database: DatabaseConfig = field(default_factory=DatabaseConfig)
    logging:  LoggingConfig  = field(default_factory=LoggingConfig)
    debug:    bool            = False
    version:  str             = "0.0.0"


def dict_to_dataclass(data: dict, cls):
    """
    Populate a dataclass from a dict, recursively for nested dataclasses.
    Extra keys are silently ignored.

    Example:
        cfg = dict_to_dataclass(raw_config, AppConfig)
    """
    import dataclasses
    import typing

    if not dataclasses.is_dataclass(cls):
        return data

    # `from __future__ import annotations` makes f.type a string,
    # so resolve the real types before checking for nested dataclasses.
    hints = typing.get_type_hints(cls)
    kwargs = {}
    for f in dataclasses.fields(cls):
        if f.name not in data:
            continue
        val = data[f.name]
        ftype = hints.get(f.name, f.type)
        if dataclasses.is_dataclass(ftype) and isinstance(val, dict):
            val = dict_to_dataclass(val, ftype)
        kwargs[f.name] = val
    return cls(**kwargs)


def load_app_config(
    path: str | Path = "config.toml",
    env_prefix: str = "APP__",
) -> AppConfig:
    """
    Load and validate app configuration from TOML + environment.

    Example:
        cfg = load_app_config("config.toml")
        print(cfg.server.port)
        print(cfg.database.url)
    """
    raw = try_load(path, default={})
    raw = apply_env_overrides(raw, prefix=env_prefix)
    return dict_to_dataclass(raw, AppConfig)


# ─────────────────────────────────────────────────────────────────────────────
# 5. pyproject.toml reader
# ─────────────────────────────────────────────────────────────────────────────

def read_pyproject(path: str | Path = "pyproject.toml") -> dict:
    """
    Read a pyproject.toml and return its contents.

    Example:
        meta = read_pyproject()
        name = meta.get("project", {}).get("name", "unknown")
        deps = meta.get("project", {}).get("dependencies", [])
    """
    return try_load(path, default={})


def get_project_meta(pyproject: str | Path = "pyproject.toml") -> dict:
    """
    Extract common project metadata from pyproject.toml.

    Example:
        meta = get_project_meta()
        print(f"{meta['name']} v{meta['version']}")
    """
    data    = read_pyproject(pyproject)
    project = data.get("project", {})
    tool    = data.get("tool", {})

    return {
        "name":        project.get("name", ""),
        "version":     project.get("version", ""),
        "description": project.get("description", ""),
        "authors":     project.get("authors", []),
        "requires_python": project.get("requires-python", ""),
        "dependencies":    project.get("dependencies", []),
        "optional_deps":   project.get("optional-dependencies", {}),
        "scripts":         project.get("scripts", {}),
        "build_backend":   data.get("build-system", {}).get("build-backend", ""),
    }


# ─────────────────────────────────────────────────────────────────────────────
# 6. Write support (requires tomli_w)
# ─────────────────────────────────────────────────────────────────────────────

def dump_toml(obj: dict) -> str:
    """
    Serialize a dict to TOML string.
    Requires: pip install tomli_w

    Example:
        text = dump_toml({"server": {"host": "localhost", "port": 8080}})
    """
    if not _HAS_TOMLI_W:
        raise ImportError("Install tomli_w to write TOML: pip install tomli_w")
    return tomli_w.dumps(obj)


def write_toml(obj: dict, path: str | Path) -> Path:
    """Write dict to a TOML file."""
    if not _HAS_TOMLI_W:
        raise ImportError("Install tomli_w to write TOML: pip install tomli_w")
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "wb") as f:
        tomli_w.dump(obj, f)
    return p


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    SAMPLE_TOML = """
# top-level keys must appear before the first [table] header
version = "1.2.3"

[server]
host    = "0.0.0.0"
port    = 8080
workers = 4
debug   = false

[database]
url       = "postgresql://user:pass@localhost/mydb"
pool_size = 10

[logging]
level  = "INFO"
format = "json"
"""

    print("=== load_string ===")
    raw = load_string(SAMPLE_TOML)
    print(f"  server.port:     {raw['server']['port']}")
    print(f"  database.url:    {raw['database']['url']}")
    print(f"  version:         {raw['version']}")

    print("\n=== deep_merge ===")
    defaults = {"server": {"host": "0.0.0.0", "port": 8080, "workers": 4}}
    override = {"server": {"host": "localhost", "debug": True}}
    merged   = deep_merge(defaults, override)
    print(f"  Merged server:   {merged['server']}")

    print("\n=== env overrides ===")
    os.environ["APP__SERVER__PORT"] = "9090"
    os.environ["APP__DEBUG"]        = "true"
    overridden = apply_env_overrides(raw, prefix="APP__")
    print(f"  server.port after env:  {overridden['server']['port']}")
    print(f"  debug after env:         {overridden.get('debug')}")

    print("\n=== typed config (dict_to_dataclass) ===")
    typed = dict_to_dataclass(raw, AppConfig)
    print(f"  AppConfig.server.port:    {typed.server.port}")
    print(f"  AppConfig.database.url:   {typed.database.url}")
    print(f"  AppConfig.logging.level:  {typed.logging.level}")
    print(f"  AppConfig.version:        {typed.version}")

    if _HAS_TOMLI_W:
        print("\n=== dump_toml ===")
        text = dump_toml({"project": {"name": "myapp", "version": "1.0.0"}})
        print(f"  {text.strip()}")
    else:
        print("\n(install tomli_w for write: pip install tomli_w)")

Compared with the configparser stdlib alternative: configparser handles INI-style files (key = value under [section]) and is built into Python, but TOML supports typed values (integers, booleans, datetimes, arrays, nested tables) and is the standard for pyproject.toml and modern Python project config. Use configparser for simple legacy INI files, and TOML for new configuration files where type fidelity matters.

Compared with the pydantic-settings alternative: pydantic-settings loads configuration from environment variables, dotenv files, and arbitrary sources with full Pydantic v2 type validation and coercion, while tomllib is a pure TOML parser with no validation layer. Combine them by loading the TOML file with tomllib and feeding the dict to a pydantic Settings model, e.g. Settings(**cfg), for full type validation.

The Claude Skills 360 bundle includes tomllib skill sets covering load_file()/load_string()/try_load(), deep_merge()/load_layered() multi-file layering, apply_env_overrides() with prefix/separator options, dict_to_dataclass() typed dataclass binding, the AppConfig/ServerConfig/DatabaseConfig dataclasses, load_app_config() one-call setup, read_pyproject()/get_project_meta(), and dump_toml()/write_toml() with tomli_w. Start with the free tier to try TOML configuration management code generation.
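The configparser contrast is easy to demonstrate: configparser returns every value as a string, while tomllib preserves types. A stdlib-only sketch, using a snippet that happens to be valid as both INI and TOML:

```python
import configparser
import sys

if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # pip install tomli

SOURCE = "[server]\nport = 8080\ndebug = false\n"  # valid as both INI and TOML

ini = configparser.ConfigParser()
ini.read_string(SOURCE)
toml = tomllib.loads(SOURCE)

print(repr(ini["server"]["port"]))    # '8080'  (configparser values are always str)
print(repr(toml["server"]["port"]))   # 8080    (tomllib preserves the integer)
print(repr(ini["server"]["debug"]))   # 'false' (needs getboolean() to interpret)
print(repr(toml["server"]["debug"]))  # False   (a real bool)
```

This is the "type fidelity" point in practice: with configparser, every consumer must re-parse ints, floats, and booleans; with tomllib, the parser does it once.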

Keep Reading

AI

Claude Code for email.contentmanager: Python Email Content Accessors

Read and write EmailMessage body content with Python's email.contentmanager module and Claude Code: the ContentManager class and the standard content_manager and raw_data_manager instances, the get_content/set_content handlers for text, HTML, and binary parts, the make_alternative/make_mixed/make_related multipart conversions, add_attachment, and integration with email.message, email.policy, email.mime, and io for building high-level email readers, attachment extractors, text and HTML body accessors, and policy-aware MIME construction pipelines.

5 min read Feb 12, 2029
AI

Claude Code for email.charset: Python Email Charset Encoding

Control header and body encoding for international email with Python's email.charset module and Claude Code: the Charset class with its header_encoding, body_encoding, input_codec, and output_codec attributes, the header_encode/body_encode/convert methods, the add_charset/add_alias/add_codec registry functions, and integration with email.message, email.mime, email.policy, and email.encoders for building international email senders, non-ASCII header encoders, Content-Transfer-Encoding selectors, and charset-aware MIME pipelines.

5 min read Feb 11, 2029
AI

Claude Code for email.utils: Python Email Address and Header Utilities

Parse and format RFC 2822 email addresses and dates with Python's email.utils module and Claude Code: parseaddr/formataddr/getaddresses for address handling, parsedate/parsedate_tz/parsedate_to_datetime and formatdate/format_datetime for dates, make_msgid for globally unique Message-IDs, the RFC 2231 parameter helpers decode_rfc2231/encode_rfc2231/collapse_rfc2231_value, and integration with email.message, email.headerregistry, datetime, and time for building address parsers, date formatters, and RFC-compliant email construction utilities.

5 min read Feb 10, 2029

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
