Claude Code for psutil: System and Process Monitoring in Python — Claude Skills 360 Blog

Claude Code for psutil: System and Process Monitoring in Python

Published: January 20, 2028
Read time: 5 min
By: Claude Skills 360

psutil reads system and process metrics in Python. Install with pip install psutil.

CPU: import psutil; psutil.cpu_percent(interval=1) — blocks 1 s to sample overall utilization. psutil.cpu_percent(interval=1, percpu=True) — per-core. psutil.cpu_count() — logical cores. psutil.cpu_count(logical=False) — physical. psutil.cpu_freq().current — MHz. psutil.cpu_times() — user/system/idle.

Memory: mem = psutil.virtual_memory(); mem.percent, mem.available, mem.total, mem.used. swap = psutil.swap_memory(); swap.percent.

Disk: disk = psutil.disk_usage("/"); disk.percent, disk.free, disk.total. psutil.disk_io_counters() — read_bytes/write_bytes. psutil.disk_partitions() — mounted filesystems.

Network: net = psutil.net_io_counters(); net.bytes_sent, net.bytes_recv, net.packets_sent. psutil.net_connections() — open TCP/UDP connections. psutil.net_if_stats() — NIC speed/duplex.

Process: p = psutil.Process(pid). p.cpu_percent(interval=0.1). p.memory_info().rss. p.memory_percent(). p.status() → "running"/"sleeping". p.open_files(). p.net_connections() (Process.connections() is deprecated since psutil 6.0). p.name(). p.cmdline(). p.create_time(). p.num_threads().

Self and enumeration: p = psutil.Process() — current process. psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]) — all processes. psutil.pid_exists(pid). Kill: p.terminate(), p.kill(), p.send_signal(signal.SIGTERM).

Sensors and system: psutil.sensors_battery().percent — note sensors_battery() returns None on machines without a battery. psutil.sensors_temperatures(). psutil.sensors_fans(). psutil.boot_time(). psutil.users().

Claude Code generates psutil health-check functions, process monitors, and resource alert pipelines.
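The core calls above can be exercised in a few lines. A minimal sketch — the percentages are point-in-time samples, so exact values will differ on every run:

```python
import psutil

# One blocking CPU sample plus instantaneous memory and disk readings.
cpu = psutil.cpu_percent(interval=0.5)   # % utilization over a 0.5 s window
mem = psutil.virtual_memory()            # typed namedtuple, fields in bytes
disk = psutil.disk_usage("/")            # usage for the root mount

print(f"cpu  : {cpu:.1f}%")
print(f"mem  : {mem.percent:.1f}% used, {mem.available / 1024**3:.1f} GB available")
print(f"disk : {disk.percent:.1f}% used, {disk.free / 1024**3:.1f} GB free")
```

The same three calls behave identically on Linux, macOS, and Windows, which is what makes psutil useful as a health-check primitive.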

CLAUDE.md for psutil

## psutil Stack
- Version: psutil >= 6.0 | pip install psutil
- CPU: psutil.cpu_percent(interval=1) | cpu_count() | cpu_freq()
- Memory: psutil.virtual_memory().percent | .available | .total
- Disk: psutil.disk_usage("/").percent | disk_io_counters()
- Network: psutil.net_io_counters().bytes_recv | net_connections()
- Process: psutil.Process(pid) | .cpu_percent() | .memory_info().rss
- Self: psutil.Process() — current process, no pid needed

psutil System Monitoring Pipeline

# app/system_health.py — psutil metrics collection and alerting
from __future__ import annotations

import os
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

import psutil


# ─────────────────────────────────────────────────────────────────────────────
# Metrics data classes
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class CpuMetrics:
    percent:       float        # overall utilization %
    per_core:      list[float]  # per-logical-core utilization
    count_logical: int
    count_physical: int
    freq_mhz:      Optional[float]
    load_avg_1m:   float        # 1-minute load average


@dataclass
class MemoryMetrics:
    total_gb:    float
    used_gb:     float
    available_gb: float
    percent:     float
    swap_percent: float
    swap_used_gb: float


@dataclass
class DiskMetrics:
    path:       str
    total_gb:   float
    used_gb:    float
    free_gb:    float
    percent:    float
    read_mb:    float   # cumulative MB read since boot (system-wide, not per-mount)
    write_mb:   float   # cumulative MB written since boot (system-wide, not per-mount)


@dataclass
class NetworkMetrics:
    bytes_sent_mb:   float
    bytes_recv_mb:   float
    packets_sent:    int
    packets_recv:    int
    errors_in:       int
    errors_out:      int
    tcp_connections: int


@dataclass
class ProcessMetrics:
    pid:             int
    name:            str
    status:          str
    cpu_percent:     float
    memory_rss_mb:   float
    memory_percent:  float
    num_threads:     int
    open_files:      int
    create_time:     datetime


@dataclass
class SystemSnapshot:
    timestamp: datetime
    cpu:       CpuMetrics
    memory:    MemoryMetrics
    disks:     list[DiskMetrics]
    network:   NetworkMetrics
    self_proc: ProcessMetrics   # metrics for the current process


# ─────────────────────────────────────────────────────────────────────────────
# Collectors
# ─────────────────────────────────────────────────────────────────────────────

def collect_cpu(interval: float = 0.5) -> CpuMetrics:
    """
    cpu_percent(interval=N) blocks for N seconds to compute utilization.
    interval=None returns utilization since the last call (non-blocking),
    but the very first non-blocking call returns a meaningless 0.0 — so we
    take one blocking per-core sample and derive the overall figure from it.
    """
    freq = psutil.cpu_freq()
    la   = psutil.getloadavg()   # 1, 5, 15 min load averages (emulated on Windows)
    per_core = psutil.cpu_percent(interval=interval, percpu=True)
    return CpuMetrics(
        percent=round(sum(per_core) / max(len(per_core), 1), 1),
        per_core=per_core,
        count_logical=psutil.cpu_count() or 1,
        count_physical=psutil.cpu_count(logical=False) or 1,
        freq_mhz=freq.current if freq else None,
        load_avg_1m=la[0],
    )


def collect_memory() -> MemoryMetrics:
    vm   = psutil.virtual_memory()
    swap = psutil.swap_memory()
    GB   = 1024 ** 3
    return MemoryMetrics(
        total_gb=round(vm.total / GB, 2),
        used_gb=round(vm.used / GB, 2),
        available_gb=round(vm.available / GB, 2),
        percent=vm.percent,
        swap_percent=swap.percent,
        swap_used_gb=round(swap.used / GB, 2),
    )


def collect_disks(paths: list[str] | None = None) -> list[DiskMetrics]:
    """Collect disk metrics for the given paths (default: all mounted partitions)."""
    if paths is None:
        paths = [p.mountpoint for p in psutil.disk_partitions(all=False)]

    io = psutil.disk_io_counters(perdisk=False)
    GB = 1024 ** 3
    MB = 1024 ** 2
    metrics: list[DiskMetrics] = []
    for path in paths:
        try:
            usage = psutil.disk_usage(path)
            metrics.append(DiskMetrics(
                path=path,
                total_gb=round(usage.total / GB, 2),
                used_gb=round(usage.used / GB, 2),
                free_gb=round(usage.free / GB, 2),
                percent=usage.percent,
                read_mb=round(io.read_bytes / MB, 1) if io else 0.0,
                write_mb=round(io.write_bytes / MB, 1) if io else 0.0,
            ))
        except (PermissionError, FileNotFoundError):
            continue
    return metrics


def collect_network() -> NetworkMetrics:
    net = psutil.net_io_counters()
    MB  = 1024 ** 2
    try:
        tcp = sum(1 for c in psutil.net_connections(kind="tcp")
                  if c.status == "ESTABLISHED")
    except (psutil.AccessDenied, PermissionError):
        tcp = -1
    return NetworkMetrics(
        bytes_sent_mb=round(net.bytes_sent / MB, 2),
        bytes_recv_mb=round(net.bytes_recv / MB, 2),
        packets_sent=net.packets_sent,
        packets_recv=net.packets_recv,
        errors_in=net.errin,
        errors_out=net.errout,
        tcp_connections=tcp,
    )


def collect_process(pid: int | None = None) -> ProcessMetrics:
    """
    Collect metrics for a specific process.
    pid=None → current process (psutil.Process() uses os.getpid()).
    """
    p = psutil.Process(pid)
    MB = 1024 ** 2
    try:
        open_files = len(p.open_files())
    except (psutil.AccessDenied, PermissionError):
        open_files = -1
    return ProcessMetrics(
        pid=p.pid,
        name=p.name(),
        status=p.status(),
        cpu_percent=p.cpu_percent(interval=0.1),
        memory_rss_mb=round(p.memory_info().rss / MB, 2),
        memory_percent=round(p.memory_percent(), 2),
        num_threads=p.num_threads(),
        open_files=open_files,
        create_time=datetime.fromtimestamp(p.create_time(), tz=timezone.utc),
    )


def snapshot() -> SystemSnapshot:
    """Collect all metrics in a single snapshot."""
    return SystemSnapshot(
        timestamp=datetime.now(timezone.utc),
        cpu=collect_cpu(interval=0.2),
        memory=collect_memory(),
        disks=collect_disks(["/", "/tmp"]),
        network=collect_network(),
        self_proc=collect_process(),
    )


# ─────────────────────────────────────────────────────────────────────────────
# Health check — for /health endpoints
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class HealthStatus:
    healthy:  bool
    warnings: list[str]
    errors:   list[str]

    def to_dict(self) -> dict:
        return {
            "status":   "healthy" if self.healthy else "unhealthy",
            "warnings": self.warnings,
            "errors":   self.errors,
        }


def check_health(
    cpu_warn:    float = 80.0,
    cpu_crit:    float = 95.0,
    mem_warn:    float = 80.0,
    mem_crit:    float = 95.0,
    disk_warn:   float = 80.0,
    disk_crit:   float = 95.0,
) -> HealthStatus:
    warnings: list[str] = []
    errors:   list[str] = []

    try:
        cpu_pct = psutil.cpu_percent(interval=0.5)
        if cpu_pct >= cpu_crit:
            errors.append(f"CPU critical: {cpu_pct:.1f}%")
        elif cpu_pct >= cpu_warn:
            warnings.append(f"CPU high: {cpu_pct:.1f}%")

        mem = psutil.virtual_memory()
        if mem.percent >= mem_crit:
            errors.append(f"Memory critical: {mem.percent:.1f}%")
        elif mem.percent >= mem_warn:
            warnings.append(f"Memory high: {mem.percent:.1f}%")

        for part in psutil.disk_partitions(all=False):
            try:
                usage = psutil.disk_usage(part.mountpoint)
                if usage.percent >= disk_crit:
                    errors.append(f"Disk {part.mountpoint} critical: {usage.percent:.1f}%")
                elif usage.percent >= disk_warn:
                    warnings.append(f"Disk {part.mountpoint} high: {usage.percent:.1f}%")
            except (PermissionError, FileNotFoundError):
                continue

    except Exception as exc:
        errors.append(f"Metrics collection failed: {exc}")

    return HealthStatus(
        healthy=len(errors) == 0,
        warnings=warnings,
        errors=errors,
    )


# ─────────────────────────────────────────────────────────────────────────────
# Process finder
# ─────────────────────────────────────────────────────────────────────────────

def find_processes_by_name(name: str) -> list[dict]:
    """Return all running processes whose name contains `name` (case-insensitive)."""
    results = []
    name_lower = name.lower()
    for proc in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent", "status"]):
        try:
            if name_lower in (proc.info["name"] or "").lower():
                results.append(proc.info)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return results


def top_processes(n: int = 10) -> list[dict]:
    """Return top N processes by memory usage."""
    procs = []
    for proc in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        try:
            procs.append(proc.info)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return sorted(procs, key=lambda p: p.get("memory_percent") or 0, reverse=True)[:n]


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    s = snapshot()
    print(f"=== System Snapshot @ {s.timestamp.isoformat()} ===")
    print(f"CPU:    {s.cpu.percent:.1f}% ({s.cpu.count_logical} logical / "
          f"{s.cpu.count_physical} physical cores, load: {s.cpu.load_avg_1m:.2f})")
    print(f"Memory: {s.memory.percent:.1f}% used "
          f"({s.memory.used_gb:.1f} / {s.memory.total_gb:.1f} GB)")
    for d in s.disks:
        print(f"Disk {d.path}: {d.percent:.1f}% ({d.free_gb:.1f} GB free)")
    print(f"Net:    sent={s.network.bytes_sent_mb:.1f} MB "
          f"recv={s.network.bytes_recv_mb:.1f} MB "
          f"tcp={s.network.tcp_connections}")
    print(f"Self:   pid={s.self_proc.pid} mem={s.self_proc.memory_rss_mb:.1f} MB "
          f"threads={s.self_proc.num_threads}")

    print("\n=== Health Check ===")
    health = check_health()
    print(f"Status:   {health.to_dict()['status']}")
    if health.warnings:
        print(f"Warnings: {health.warnings}")
    if health.errors:
        print(f"Errors:   {health.errors}")

    print("\n=== Top 3 Processes by Memory ===")
    for p in top_processes(3):
        print(f"  pid={p['pid']} name={p['name']} mem={(p.get('memory_percent') or 0):.1f}%")

Why psutil over shelling out: calling subprocess.run(["free", "-h"]) or reading /proc/meminfo by hand returns unstructured text that varies across Linux distributions and does not exist on macOS or Windows. psutil.virtual_memory() works identically on Linux, macOS, and Windows in a single line, returns typed Python objects rather than text to parse, and raises explicit exceptions such as psutil.AccessDenied instead of failing silently when permissions are insufficient.

Why psutil alongside prometheus_client: prometheus_client exposes metrics for Prometheus to scrape, but it is not a way to query system state from within Python code. psutil.cpu_percent() and psutil.virtual_memory() return current values immediately — a common pattern is to collect with psutil and push the values into prometheus_client Gauges via Gauge.set().

The Claude Skills 360 bundle includes psutil skill sets covering cpu_percent with interval, per-core sampling and load averages, virtual_memory and swap_memory, disk_usage and disk_io_counters across partitions, net_io_counters and net_connections, Process.cpu_percent and memory_info().rss, process_iter for enumeration, find_processes_by_name, health-check thresholds, self-monitoring with psutil.Process(), and FastAPI /health endpoint integration. Start with the free tier to try system monitoring code generation.
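The collect-then-export pattern can be sketched without Prometheus installed. The dict below stands in for a set of Gauges — in a real exporter each key would be a prometheus_client.Gauge and each assignment a gauge.set(value); the metric names are illustrative, not an established convention:

```python
import psutil

def collect_gauges() -> dict[str, float]:
    """Sample current system state into a flat metric-name -> value mapping.

    In a real exporter each entry would back a prometheus_client.Gauge.
    """
    vm = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "system_cpu_percent":            psutil.cpu_percent(interval=0.2),
        "system_memory_percent":         vm.percent,
        "system_memory_available_bytes": float(vm.available),
        "system_disk_percent":           disk.percent,
        "system_disk_free_bytes":        float(disk.free),
    }

# Print in the flat "name value" shape a Prometheus exposition would use.
for name, value in sorted(collect_gauges().items()):
    print(f"{name} {value}")
```

Calling collect_gauges() from a scheduled loop (or from prometheus_client's collector hook) keeps the psutil sampling in one place and the export mechanism swappable.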

