
Claude Code for urllib.response: Python HTTP Response Objects

Published: January 29, 2029
Read time: 5 min read
By: Claude Skills 360

Python’s urllib.response module defines the response objects returned by urllib.request.urlopen(); import it with from urllib import response. The main class is addinfourl — a file-like wrapper that adds HTTP metadata to an underlying stream. It is rarely constructed directly: for http(s) URLs, urlopen() actually returns an http.client.HTTPResponse that exposes the same interface, while addinfourl itself is returned for other schemes such as file: and data:. Key interface: resp.read(amt=-1) → bytes (all or amt bytes); resp.readline() → bytes; resp.readlines() → list[bytes]; resp.info() → http.client.HTTPMessage (response headers); resp.geturl() → str (final URL after redirects); resp.getcode() → int (HTTP status code); resp.status (alias for getcode(), on addinfourl since Python 3.9); resp.url (alias for geturl()). Header shortcut: resp.info()["content-type"]. Context manager: with urllib.request.urlopen(url) as resp:. The addinfo mixin (base class) adds just info(); addinfourl adds geturl() and getcode() on top. Claude Code generates HTTP response readers, header extractors, content-type detectors, redirect tracers, and streaming download pipelines.
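The interface can be exercised offline with a data: URL — urlopen() serves those through its DataHandler and hands back a genuine urllib.response.addinfourl, so this sketch needs no network:

```python
import urllib.request

# data: URLs come back as urllib.response.addinfourl objects,
# which makes them handy for trying the interface without a server.
with urllib.request.urlopen("data:text/plain;charset=utf-8,hello") as resp:
    body = resp.read()                    # b"hello"
    ctype = resp.info()["content-type"]   # "text/plain;charset=utf-8"
    final = resp.geturl()                 # the data: URL itself
```

Note that getcode() returns None here — data: responses carry no HTTP status code.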

CLAUDE.md for urllib.response

## urllib.response Stack
- Stdlib: from urllib import request, response
- Open:   resp = request.urlopen(url)   # returns addinfourl
- Read:   resp.read() / resp.read(n) / resp.readline() / resp.readlines()
- Meta:   resp.info()           # http.client.HTTPMessage (headers)
-         resp.geturl()         # final URL (post-redirect)
-         resp.getcode()        # int status code
-         resp.status / resp.url  # aliases (3.9+)
- Header: resp.info()["content-type"]
- CM:     with request.urlopen(url) as resp: data = resp.read()
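A minimal helper combining the context-manager, header-shortcut, and charset patterns from the cheatsheet above — fetch_text is an illustrative name, and utf-8 is an assumed fallback when the server sends no charset:

```python
import urllib.request

def fetch_text(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL and decode the body using the Content-Type charset."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        ctype = resp.info().get("content-type", "")
        charset = "utf-8"                                # assumed default
        for part in ctype.split(";")[1:]:
            part = part.strip()
            if part.lower().startswith("charset="):
                charset = part[len("charset="):].strip('"')
        return resp.read().decode(charset, errors="replace")
```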

urllib.response HTTP Response Pipeline

# app/urllibresponseutil.py — read, headers, redirect, chunked, inspect, mock
from __future__ import annotations

import io
import json
import urllib.request
import urllib.error
from dataclasses import dataclass, field
from typing import Any
from http.client import HTTPMessage


# ─────────────────────────────────────────────────────────────────────────────
# 1. Response metadata helpers
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class ResponseInfo:
    url:          str
    final_url:    str
    status:       int
    content_type: str
    content_len:  int | None   # None if header absent
    encoding:     str
    headers:      dict[str, str]
    redirected:   bool


def inspect_response(resp: Any, requested_url: str | None = None) -> ResponseInfo:
    """
    Extract metadata from a urllib addinfourl/HTTPResponse object.

    Pass the originally requested URL to detect redirects: resp.url is
    already the post-redirect URL, so the response alone cannot tell.

    Example:
        url = "https://httpbin.org/get"
        with urllib.request.urlopen(url) as resp:
            info = inspect_response(resp, requested_url=url)
            print(info.status, info.content_type)
    """
    headers_obj: HTTPMessage = resp.info()
    raw_headers = dict(headers_obj.items())

    ct = headers_obj.get("content-type", "")
    final_url = resp.geturl()

    # Determine charset from the Content-Type parameters, defaulting to utf-8
    enc = "utf-8"
    for part in ct.split(";")[1:]:
        part = part.strip()
        if part.lower().startswith("charset="):
            enc = part[len("charset="):].strip().strip('"')
            break

    # Content-Length is optional (absent for chunked transfer encoding)
    cl_str = headers_obj.get("content-length")
    cl = int(cl_str) if cl_str and cl_str.isdigit() else None

    return ResponseInfo(
        url=requested_url or final_url,
        final_url=final_url,
        status=resp.getcode(),
        content_type=ct,
        content_len=cl,
        encoding=enc,
        headers=raw_headers,
        redirected=bool(requested_url) and requested_url != final_url,
    )


def response_headers(resp: Any) -> dict[str, str]:
    """
    Return all response headers as a plain dict.

    Example:
        with urllib.request.urlopen(url) as resp:
            hdrs = response_headers(resp)
            print(hdrs.get("server"))
    """
    return {k: v for k, v in resp.info().items()}


# ─────────────────────────────────────────────────────────────────────────────
# 2. Safe fetch helpers
# ─────────────────────────────────────────────────────────────────────────────

@dataclass
class FetchResult:
    url:      str
    status:   int
    body:     bytes
    headers:  dict[str, str]
    error:    str = ""

    @property
    def ok(self) -> bool:
        return 200 <= self.status < 300

    def text(self, encoding: str = "utf-8",
             errors: str = "replace") -> str:
        return self.body.decode(encoding, errors=errors)

    def json(self) -> Any:
        return json.loads(self.body)


def safe_fetch(url: str, timeout: float = 10.0,
               headers: dict[str, str] | None = None) -> FetchResult:
    """
    Fetch a URL and return a FetchResult.  Never raises; errors go to .error.

    Example:
        r = safe_fetch("https://httpbin.org/get")
        if r.ok:
            print(r.json())
    """
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read()
            return FetchResult(
                url=resp.geturl(),
                status=resp.getcode(),
                body=body,
                headers={k: v for k, v in resp.info().items()},
            )
    except urllib.error.HTTPError as e:
        body = b""
        try:
            body = e.read()
        except Exception:
            pass
        return FetchResult(url=url, status=e.code, body=body,
                           headers={}, error=str(e))
    except Exception as e:
        return FetchResult(url=url, status=0, body=b"",
                           headers={}, error=str(e))


# ─────────────────────────────────────────────────────────────────────────────
# 3. Streaming download with progress
# ─────────────────────────────────────────────────────────────────────────────

def download_stream(url: str,
                    dest: str | io.IOBase,
                    chunk_size: int = 65536,
                    timeout: float = 30.0,
                    on_progress: Any | None = None) -> int:
    """
    Stream a URL to a file path or binary file-like object.
    Calls on_progress(bytes_downloaded, total_or_None) after each chunk.
    Returns total bytes written.

    Example:
        n = download_stream(
            "https://example.com/file.gz",
            "/tmp/file.gz",
            on_progress=lambda n, t: print(f"{n}/{t}"),
        )
        print(f"downloaded {n} bytes")
    """
    req = urllib.request.Request(url)
    total_written = 0
    close_after = False

    if isinstance(dest, str):
        outfile: io.IOBase = open(dest, "wb")
        close_after = True
    else:
        outfile = dest

    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            cl_str = resp.info().get("content-length")
            total = int(cl_str) if cl_str and cl_str.isdigit() else None
            while True:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                outfile.write(chunk)
                total_written += len(chunk)
                if on_progress:
                    on_progress(total_written, total)
    finally:
        if close_after:
            outfile.close()

    return total_written


# ─────────────────────────────────────────────────────────────────────────────
# 4. Content-type and encoding detector
# ─────────────────────────────────────────────────────────────────────────────

def detect_content_type(resp: Any) -> tuple[str, str]:
    """
    Extract mime type and charset from a response's Content-Type header.
    Returns (mime_type, charset).

    Example:
        with urllib.request.urlopen(url) as resp:
            mime, charset = detect_content_type(resp)
            print(mime, charset)   # "application/json", "utf-8"
    """
    ct = resp.info().get("content-type", "")
    mime = ct.split(";")[0].strip()
    charset = "utf-8"
    for part in ct.split(";")[1:]:
        part = part.strip()
        if part.lower().startswith("charset="):
            charset = part[8:].strip().strip('"')
            break
    return mime, charset


# ─────────────────────────────────────────────────────────────────────────────
# 5. Redirect tracer
# ─────────────────────────────────────────────────────────────────────────────

class RedirectRecorder(urllib.request.HTTPRedirectHandler):
    """
    An HTTPRedirectHandler that records each redirect URL.

    Example:
        recorder = RedirectRecorder()
        opener = urllib.request.build_opener(recorder)
        with opener.open("https://httpbin.org/redirect/2") as resp:
            print(recorder.redirect_chain)
    """

    def __init__(self) -> None:
        self.redirect_chain: list[str] = []

    def redirect_request(self, req, fp, code, msg, hdrs, newurl):
        self.redirect_chain.append(newurl)
        return super().redirect_request(req, fp, code, msg, hdrs, newurl)


def trace_redirects(url: str, timeout: float = 10.0) -> dict[str, Any]:
    """
    Follow a URL and record the redirect chain.
    Returns {"start": url, "chain": [...], "final": url, "status": code}.

    Example:
        info = trace_redirects("https://httpbin.org/redirect/2")
        print(info["chain"])
    """
    recorder = RedirectRecorder()
    opener = urllib.request.build_opener(recorder)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return {
                "start":  url,
                "chain":  recorder.redirect_chain,
                "final":  resp.geturl(),
                "status": resp.getcode(),
            }
    except Exception as e:
        return {
            "start":  url,
            "chain":  recorder.redirect_chain,
            "final":  "",
            "status": 0,
            "error":  str(e),
        }


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("=== urllib.response demo ===")

    # Use a reliable, small public URL for demo
    test_url = "https://httpbin.org/get"

    # ── safe_fetch ────────────────────────────────────────────────────────────
    print("\n--- safe_fetch ---")
    r = safe_fetch(test_url, timeout=5.0)
    print(f"  status : {r.status}")
    print(f"  ok     : {r.ok}")
    print(f"  url    : {r.url[:60]}")
    print(f"  bytes  : {len(r.body)}")
    if r.ok:
        print(f"  ct     : {r.headers.get('content-type', '')!r}")

    # ── inspect_response ──────────────────────────────────────────────────────
    print("\n--- inspect_response ---")
    try:
        with urllib.request.urlopen(test_url, timeout=5) as resp:
            info = inspect_response(resp)
            print(f"  status      : {info.status}")
            print(f"  content_type: {info.content_type!r}")
            print(f"  encoding    : {info.encoding!r}")
            print(f"  content_len : {info.content_len}")
            print(f"  final_url   : {info.final_url[:60]}")
    except Exception as e:
        print(f"  (network unavailable: {e})")

    # ── fetch + json ──────────────────────────────────────────────────────────
    print("\n--- fetch + json ---")
    r = safe_fetch("https://httpbin.org/json", timeout=5.0)
    if r.ok:
        data = r.json()
        print(f"  json keys: {list(data.keys())[:5]}")
    else:
        print(f"  error: {r.error or r.status}")

    # ── detect_content_type ───────────────────────────────────────────────────
    print("\n--- detect_content_type ---")
    try:
        with urllib.request.urlopen("https://httpbin.org/json", timeout=5) as resp:
            mime, charset = detect_content_type(resp)
            print(f"  mime={mime!r}  charset={charset!r}")
    except Exception as e:
        print(f"  (network: {e})")

    print("\n=== done ===")

For the requests (PyPI) alternative: requests.get(url) returns a Response with .text, .json(), .content, .headers, .status_code, .url, and automatic redirect following. Prefer requests for production HTTP client work — its API is more ergonomic, and it handles streaming, sessions, auth, retries, and TLS verification with sensible defaults.

For the httpx (PyPI) alternative: httpx.get(url) provides a requests-compatible API plus native async support via async with httpx.AsyncClient() for asyncio. Use httpx when you need both sync and async HTTP, HTTP/2, or advanced connection pooling; use urllib.response/urllib.request for zero-dependency scripts and tools.

The Claude Skills 360 bundle includes urllib.response skill sets covering the ResponseInfo/inspect_response() metadata extractor, the response_headers() header-dict converter, the FetchResult/safe_fetch() safe wrapper, the download_stream() progress-tracking download, the detect_content_type() MIME/charset detector, and the RedirectRecorder/trace_redirects() redirect tracer. Start with the free tier to try HTTP response patterns and urllib.response pipeline code generation.

