Claude Code for importlib: Python Dynamic Import and Resource Access — Claude Skills 360 Blog

Claude Code for importlib: Python Dynamic Import and Resource Access

Published: October 1, 2028
Read time: 5 min
By: Claude Skills 360

Python’s importlib module provides the implementation of Python’s import system, plus utilities for dynamic imports, module reloading, and package resource access. import_module: mod = importlib.import_module("json") is the same as import json; importlib.import_module(".sub", package="pkg") handles relative imports. reload: importlib.reload(mod) re-executes the module’s code (useful for hot reload in development). find_spec: spec = importlib.util.find_spec("numpy") → a ModuleSpec, or None if the module is not installed. spec_from_file_location: spec = importlib.util.spec_from_file_location("mymod", "/path/to/file.py"). module_from_spec: mod = importlib.util.module_from_spec(spec); spec.loader.exec_module(mod) loads an arbitrary .py file. resources.files: ref = importlib.resources.files("mypackage") / "data/config.json" → a Traversable; ref.read_text() and ref.read_bytes() read it. as_file: with importlib.resources.as_file(ref) as path: guarantees a real Path. metadata.version: importlib.metadata.version("requests") → "2.31.0". entry_points: importlib.metadata.entry_points(group="console_scripts") → the matching entry points. requires: importlib.metadata.requires("flask") → a list of requirement strings. packages_distributions: importlib.metadata.packages_distributions() → a dict mapping top-level module names to their distributions. Claude Code generates plugin loaders, hot-reload utilities, embedded resource readers, and installed-package inspectors.
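Condensed into a runnable sketch, the core import calls look like this (only stdlib modules are touched, so it runs anywhere):

```python
import importlib
import importlib.util

# Dynamic import by name string; equivalent to `import json`.
json_mod = importlib.import_module("json")
print(json_mod.dumps({"ok": True}))        # {"ok": true}

# Probe availability without importing; None means "not installed".
spec = importlib.util.find_spec("definitely_not_installed_xyz")
print(spec is None)                        # True

# The relative form needs an anchor package, e.g.:
#   importlib.import_module(".utils", package="myapp")
```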

CLAUDE.md for importlib

## importlib Stack
- Stdlib: import importlib, importlib.util, importlib.resources, importlib.metadata
- Import: mod = importlib.import_module("json")
- Find:   spec = importlib.util.find_spec("numpy")  # None if not installed
- File:   spec = importlib.util.spec_from_file_location("name", path)
          mod  = importlib.util.module_from_spec(spec)
          spec.loader.exec_module(mod)
- Resource: ref = importlib.resources.files("pkg") / "data/file.txt"
            text = ref.read_text(encoding="utf-8")
- Meta:   importlib.metadata.version("requests")
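The cheat sheet stops at read_text(); when downstream code needs a real filesystem path (a C extension, a subprocess), importlib.resources.as_file bridges the gap. A minimal sketch, reading from the stdlib json package purely for demonstration; real code would target your own package's data files:

```python
import importlib.resources
from importlib.resources import as_file

# files() returns a Traversable; "/" navigates the package tree.
ref = importlib.resources.files("json") / "__init__.py"
print(ref.is_file())                 # True
text = ref.read_text(encoding="utf-8")

# as_file() yields a concrete pathlib.Path, extracting to a
# temporary file if the package lives inside a zip or wheel.
with as_file(ref) as path:
    suffix = path.suffix
print(suffix)                        # .py
```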

importlib Dynamic Import Pipeline

# app/importutil.py — import, find, load from file, resources, metadata, plugin
from __future__ import annotations

import importlib
import importlib.metadata
import importlib.resources
import importlib.util
import sys
from dataclasses import dataclass
from pathlib import Path
from types import ModuleType
from typing import Any


# ─────────────────────────────────────────────────────────────────────────────
# 1. Dynamic import helpers
# ─────────────────────────────────────────────────────────────────────────────

def import_module(name: str, package: str | None = None) -> ModuleType:
    """
    Import a module by name string.

    Example:
        json   = import_module("json")
        sub    = import_module(".utils", package="myapp")
    """
    return importlib.import_module(name, package=package)


def try_import(name: str, default: Any = None) -> ModuleType | Any:
    """
    Import a module, returning default on ImportError.

    Example:
        numpy = try_import("numpy")
        if numpy is None:
            print("numpy not installed")
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return default


def is_importable(name: str) -> bool:
    """
    Return True if the module can be imported (installed and on sys.path).

    Example:
        if is_importable("ujson"):
            import ujson as json
        else:
            import json
    """
    return importlib.util.find_spec(name) is not None


def import_attr(dotted_path: str) -> Any:
    """
    Import a module and return a named attribute, given "module.attr" path.

    Example:
        loads = import_attr("json.loads")
        Path  = import_attr("pathlib.Path")
    """
    module_path, _, attr = dotted_path.rpartition(".")
    if not module_path:
        raise ValueError(f"No module in dotted path: {dotted_path!r}")
    mod = importlib.import_module(module_path)
    return getattr(mod, attr)


def reload_module(module: ModuleType) -> ModuleType:
    """
    Reload an already-imported module to pick up source changes.

    Example:
        import myconfig
        # ... edit myconfig.py ...
        myconfig = reload_module(myconfig)
    """
    return importlib.reload(module)


# ─────────────────────────────────────────────────────────────────────────────
# 2. Load module from arbitrary file path
# ─────────────────────────────────────────────────────────────────────────────

def load_from_file(
    path: str | Path,
    module_name: str | None = None,
    add_to_sys_modules: bool = False,
) -> ModuleType:
    """
    Load a Python file as a module without it being on sys.path.

    module_name: defaults to the file stem.
    add_to_sys_modules: register in sys.modules under module_name.

    Example:
        plugin = load_from_file("/plugins/my_plugin.py")
        plugin.run()
    """
    p = Path(path)
    name = module_name or p.stem
    spec = importlib.util.spec_from_file_location(name, str(p))
    if spec is None or spec.loader is None:
        raise ImportError(f"Cannot create spec for {path!r}")
    mod = importlib.util.module_from_spec(spec)
    if add_to_sys_modules:
        sys.modules[name] = mod
    spec.loader.exec_module(mod)  # type: ignore[attr-defined]
    return mod


def load_plugins_from_dir(
    directory: str | Path,
    pattern: str = "*.py",
    exclude: list[str] | None = None,
) -> dict[str, ModuleType]:
    """
    Load all .py files in a directory as plugins.
    Returns {stem: module}; load failures are stored under "<stem>_ERROR" keys
    with the exception as the value.

    Example:
        plugins = load_plugins_from_dir("plugins/")
        for name, mod in plugins.items():
            if hasattr(mod, "run"):
                mod.run()
    """
    excluded = set(exclude or []) | {"__init__"}
    result: dict[str, ModuleType] = {}
    for p in sorted(Path(directory).glob(pattern)):
        if p.stem not in excluded:
            try:
                result[p.stem] = load_from_file(p)
            except Exception as e:
                result[p.stem + "_ERROR"] = e  # type: ignore[assignment]
    return result


# ─────────────────────────────────────────────────────────────────────────────
# 3. Package resource access
# ─────────────────────────────────────────────────────────────────────────────

def read_package_resource(package: str, resource: str, encoding: str = "utf-8") -> str:
    """
    Read a text resource bundled inside a Python package (works in wheels/zips).

    Example:
        # mypackage/data/config.json exists
        content = read_package_resource("mypackage", "data/config.json")
    """
    ref = importlib.resources.files(package).joinpath(resource)
    return ref.read_text(encoding=encoding)


def read_package_resource_bytes(package: str, resource: str) -> bytes:
    """Read a binary resource from a package."""
    ref = importlib.resources.files(package).joinpath(resource)
    return ref.read_bytes()


def list_package_resources(package: str) -> list[str]:
    """
    List all resource files under a package (non-recursive).

    Example:
        files = list_package_resources("mypackage")
    """
    result: list[str] = []
    pkg_ref = importlib.resources.files(package)
    try:
        for item in pkg_ref.iterdir():
            result.append(item.name)
    except (NotADirectoryError, AttributeError):
        pass
    return sorted(result)


# ─────────────────────────────────────────────────────────────────────────────
# 4. Package metadata
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class PackageInfo:
    name:           str
    version:        str
    summary:        str
    requires_python: str
    homepage:       str
    requires:       list[str]
    entry_points:   dict[str, list[str]]   # {group: [name=value, ...]}

    def __str__(self) -> str:
        return (f"{self.name}=={self.version}  "
                f"py>={self.requires_python or '?'}  "
                f"{self.summary[:60]}")


def package_info(name: str) -> PackageInfo:
    """
    Return metadata for an installed package.

    Example:
        info = package_info("requests")
        print(info)
    """
    meta = importlib.metadata.metadata(name)

    # Entry points declared by this distribution, grouped by group name.
    ep_dict: dict[str, list[str]] = {}
    for ep in importlib.metadata.distribution(name).entry_points:
        ep_dict.setdefault(ep.group, []).append(f"{ep.name}={ep.value}")

    try:
        reqs = importlib.metadata.requires(name) or []
    except Exception:
        reqs = []

    return PackageInfo(
        name=meta.get("Name", name),
        version=meta.get("Version", "?"),
        summary=meta.get("Summary", ""),
        requires_python=meta.get("Requires-Python", ""),
        homepage=meta.get("Home-page", ""),
        requires=reqs,
        entry_points=ep_dict,
    )


def installed_version(package: str) -> str | None:
    """
    Return the installed version string for a package, or None.

    Example:
        ver = installed_version("requests")   # e.g. "2.31.0"
    """
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return None


def check_versions(requirements: dict[str, str]) -> dict[str, tuple[str | None, bool]]:
    """
    Check installed versions against minimum requirements.
    Returns {package: (installed_version, meets_requirement)}.

    Example:
        check_versions({"requests": "2.28", "flask": "2.0"})
    """
    from packaging.version import Version  # type: ignore[import]
    result: dict[str, tuple[str | None, bool]] = {}
    for pkg, min_ver in requirements.items():
        installed = installed_version(pkg)
        try:
            ok = installed is not None and Version(installed) >= Version(min_ver)
        except Exception:
            ok = False
        result[pkg] = (installed, ok)
    return result


# ─────────────────────────────────────────────────────────────────────────────
# 5. Plugin registry pattern
# ─────────────────────────────────────────────────────────────────────────────

class PluginRegistry:
    """
    Simple plugin registry: register and discover callables by entry_points group
    or by loading all .py files from a plugins directory.

    Example:
        reg = PluginRegistry("myapp.plugins")
        reg.discover_entry_points()     # load installed plugins
        reg.register("custom", my_fn)  # manual registration
        for name, fn in reg.items():
            fn()
    """

    def __init__(self, group: str = "") -> None:
        self._group = group
        self._plugins: dict[str, Any] = {}

    def register(self, name: str, obj: Any) -> None:
        self._plugins[name] = obj

    def discover_entry_points(self) -> list[str]:
        """Load plugins from the configured entry_points group."""
        loaded: list[str] = []
        try:
            eps = importlib.metadata.entry_points(group=self._group)
        except TypeError:  # Python < 3.10: entry_points() takes no kwargs
            eps = importlib.metadata.entry_points().get(self._group, [])
        for ep in eps:
            try:
                self._plugins[ep.name] = ep.load()
                loaded.append(ep.name)
            except Exception:
                pass
        return loaded

    def discover_directory(self, path: str | Path) -> list[str]:
        """Load plugins from a directory of .py files."""
        mods = load_plugins_from_dir(path)
        loaded: list[str] = []
        for name, mod in mods.items():
            if isinstance(mod, ModuleType) and hasattr(mod, "plugin"):
                self._plugins[name] = mod.plugin
                loaded.append(name)
        return loaded

    def get(self, name: str) -> Any | None:
        return self._plugins.get(name)

    def items(self):
        return self._plugins.items()

    def names(self) -> list[str]:
        return sorted(self._plugins)


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import tempfile

    print("=== importlib demo ===")

    # ── dynamic import ─────────────────────────────────────────────────────────
    print("\n--- import_module / try_import ---")
    json_mod = import_module("json")
    print(f"  json.dumps([1,2,3]) = {json_mod.dumps([1, 2, 3])}")
    numpy = try_import("numpy")
    print(f"  numpy importable: {numpy is not None}")
    print(f"  is_importable('json'): {is_importable('json')}")
    print(f"  is_importable('__nonexistent__'): {is_importable('__nonexistent__')}")

    # ── import_attr ────────────────────────────────────────────────────────────
    print("\n--- import_attr ---")
    loads = import_attr("json.loads")
    print(f"  json.loads = {loads!r}")
    print(f"  json.loads('[1,2]') = {loads('[1,2]')}")

    # ── load_from_file ─────────────────────────────────────────────────────────
    print("\n--- load_from_file ---")
    with tempfile.TemporaryDirectory() as tmpdir:
        plugin_path = Path(tmpdir) / "hello_plugin.py"
        plugin_path.write_text(
            "plugin_name = 'hello'\n"
            "def run(n=1): return 'hello ' * n\n"
        )
        plugin = load_from_file(plugin_path)
        print(f"  plugin.plugin_name = {plugin.plugin_name!r}")
        print(f"  plugin.run(3) = {plugin.run(3)!r}")

        # load_plugins_from_dir
        (Path(tmpdir) / "plugin_a.py").write_text("plugin = lambda: 'A'")
        (Path(tmpdir) / "plugin_b.py").write_text("plugin = lambda: 'B'")
        plugins = load_plugins_from_dir(tmpdir, exclude=["hello_plugin"])
        for name_, mod in plugins.items():
            print(f"  plugins/{name_}: {type(mod)}")

    # ── metadata ───────────────────────────────────────────────────────────────
    print("\n--- installed_version ---")
    for pkg in ["pip", "setuptools", "requests", "__nonexistent__"]:
        ver = installed_version(pkg)
        print(f"  {pkg:20s}: {ver!r}")

    print("\n=== done ===")

For the pkgutil alternative: pkgutil (stdlib) provides pkgutil.iter_modules() for discovering sub-packages and pkgutil.get_data() for reading package resources via the legacy pre-3.9 resource API. Use pkgutil.iter_modules() when you need to enumerate all submodules of a package for auto-discovery. Prefer importlib.resources.files() over pkgutil.get_data() for resource access, because files() returns a Traversable that works correctly in wheels and zip-imported packages where the resource is not a real filesystem file.

For the sys.modules / importlib.machinery alternative: sys.modules is the module cache that importlib.import_module consults before loading; you can inject mock modules by assigning sys.modules["name"] = mock_obj, which is useful in tests. importlib.machinery.SourceFileLoader and importlib.machinery.FileFinder are the lower-level loader classes that handle .py file discovery and compilation; use these when building custom import hooks or meta-path finders, and use importlib.util.spec_from_file_location as the cleaner high-level API when loading .py files dynamically.

The Claude Skills 360 bundle includes importlib skill sets covering the import_module()/try_import()/is_importable()/import_attr()/reload_module() dynamic import helpers, the load_from_file()/load_plugins_from_dir() file-based loaders, the read_package_resource()/read_package_resource_bytes()/list_package_resources() bundled resource readers, the PackageInfo dataclass with package_info()/installed_version()/check_versions() metadata tools, and the PluginRegistry entry-points plugin loader. Start with the free tier to try dynamic import patterns and importlib pipeline code generation.
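Both alternatives in one sketch: pkgutil.iter_modules() enumerating the submodules of a package, and a sys.modules injection standing in for a mocked dependency (the module name fake_backend is made up for illustration):

```python
import importlib
import pkgutil
import sys
import types

# Enumerate submodules of a package via its __path__.
import json
names = [m.name for m in pkgutil.iter_modules(json.__path__)]
print(names)           # e.g. ['decoder', 'encoder', 'scanner', 'tool']

# Inject a fake module before anything imports it; handy in tests.
fake = types.ModuleType("fake_backend")        # hypothetical name
fake.answer = 42
sys.modules["fake_backend"] = fake
mod = importlib.import_module("fake_backend")  # served from the cache
print(mod.answer)      # 42
```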
