
Claude Code for gevent: Coroutine-Based Concurrency for Python

Published: May 24, 2028
Read time: 5 min read
By: Claude Skills 360

gevent provides cooperative multitasking via greenlets and monkey-patching. Install with pip install gevent. The core API at a glance:

- Monkey-patch: from gevent import monkey; monkey.patch_all() (call at the top of the entry module, before any other imports)
- Spawn and join: from gevent import spawn; g = spawn(my_fn, arg); g.join()
- Join many: from gevent import joinall; joinall([spawn(fn, x) for x in items])
- Pool: from gevent.pool import Pool; p = Pool(10); results = p.map(fn, items)
- Queue: from gevent.queue import Queue; q = Queue(); q.put(item); item = q.get()
- Event: from gevent.event import Event; e = Event(); e.set(); e.wait()
- AsyncResult: from gevent.event import AsyncResult; ar = AsyncResult(); ar.set(value); ar.get()
- Timeout: from gevent import Timeout; with Timeout(5): slow_fn()
- Sleep: from gevent import sleep; sleep(1)
- Greenlet: from gevent import Greenlet; g = Greenlet(fn, *args); g.start(); g.join(); after completion g.value holds the result and g.exception any unhandled exception
- Hub: from gevent import get_hub; hub = get_hub()
- WSGI: from gevent.pywsgi import WSGIServer; WSGIServer(("0.0.0.0", 8000), app).serve_forever()
- Waiting: gevent.wait(objects, count=N) blocks until N objects are ready; gevent.iwait(objects) yields them as they complete
- HTTP: after monkey.patch_all(), socket-based code (stdlib socket, urllib, and third-party libraries like requests) becomes non-blocking

Claude Code generates gevent parallel scrapers, connection pools, producer-consumer pipelines, and cooperative I/O workers.
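
A minimal sketch tying these primitives together (the function and values here are illustrative):

# hello_gevent.py: patch first, then fan out ten cooperative workers
from gevent import monkey; monkey.patch_all()

import gevent
from gevent.pool import Pool

def square(n: int) -> int:
    gevent.sleep(0.01)               # cooperative yield, simulating I/O
    return n * n

pool = Pool(10)                      # at most 10 greenlets run at once
print(pool.map(square, range(10)))   # [0, 1, 4, ..., 81]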

CLAUDE.md for gevent

## gevent Stack
- Version: gevent >= 23.0 | pip install gevent
- Patch: from gevent import monkey; monkey.patch_all()  # FIRST import in main.py
- Spawn: from gevent import spawn, joinall; joinall([spawn(fn, x) for x in items])
- Pool: from gevent.pool import Pool; Pool(20).map(fn, items)
- Queue: from gevent.queue import Queue; q = Queue(); q.put(x); q.get()
- Timeout: from gevent import Timeout; with Timeout(5.0): blocking_call()
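
Why the "FIRST import" note above matters: monkey.patch_all() swaps blocking stdlib primitives (socket, ssl, time.sleep, threading) for cooperative versions, but a module imported before the patch may have already bound references to the original blocking objects. A minimal sketch of the rule (requests here stands in for any socket-using import):

# BAD: requests imported before the patch can capture blocking socket internals
# import requests
# from gevent import monkey; monkey.patch_all()   # too late for requests

# GOOD: patch first, then import everything else
from gevent import monkey; monkey.patch_all()
import requests                                   # now cooperative under gevent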

gevent Cooperative Concurrency Pipeline

# app/concurrent.py — gevent spawn, Pool, Queue, producer-consumer, HTTP, and WSGI
from __future__ import annotations

# IMPORTANT: monkey-patch FIRST, before all other standard library imports
from gevent import monkey
monkey.patch_all()

import logging
import time
from typing import Any, Callable, Iterator

import gevent
import gevent.pool
import gevent.queue
import gevent.event
import gevent.timeout
import gevent.lock  # provides gevent.lock.Semaphore used by RateLimitedFetcher
from gevent import Greenlet, sleep, spawn, joinall
from gevent.event import AsyncResult, Event
from gevent.pool import Pool
from gevent.queue import Empty, Full, JoinableQueue, Queue


log = logging.getLogger(__name__)


# ─────────────────────────────────────────────────────────────────────────────
# 1. Spawn and coordination helpers
# ─────────────────────────────────────────────────────────────────────────────

def run_concurrent(
    fn: Callable,
    items: list,
    concurrency: int = 10,
    timeout: float | None = None,
    raise_errors: bool = True,
) -> list:
    """
    Run fn(item) for each item in items, up to `concurrency` in parallel.
    Returns list of results in original order.
    Errors are re-raised if raise_errors=True, else stored as Exception objects.

    Example:
        urls = ["http://api.example.com/users/1", "http://api.example.com/users/2"]
        responses = run_concurrent(fetch_url, urls, concurrency=20)
    """
    pool    = Pool(concurrency)
    results = [None] * len(items)
    errors  = [None] * len(items)

    def worker(idx: int, item: Any) -> None:
        try:
            results[idx] = fn(item)
        except Exception as e:
            errors[idx] = e

    greenlets = [pool.spawn(worker, i, item) for i, item in enumerate(items)]

    if timeout is not None:
        with gevent.timeout.Timeout(timeout):
            joinall(greenlets)
    else:
        joinall(greenlets)

    if raise_errors:
        for i, err in enumerate(errors):
            if err is not None:
                raise err
    else:
        for i, err in enumerate(errors):
            if err is not None:
                results[i] = err

    return results


def run_with_timeout(
    fn: Callable,
    *args: Any,
    timeout: float = 10.0,
    default: Any = None,
    **kwargs: Any,
) -> Any:
    """
    Run fn(*args, **kwargs) with a timeout.
    Returns fn's result, or `default` if timed out.

    Example:
        data = run_with_timeout(fetch_slow_api, url, timeout=5.0, default={})
    """
    try:
        with gevent.timeout.Timeout(timeout):
            return fn(*args, **kwargs)
    except gevent.timeout.Timeout:
        log.warning("Timeout after %.1fs calling %s", timeout, getattr(fn, "__name__", fn))
        return default


def map_parallel(
    fn: Callable,
    items: list,
    size: int = 10,
    ordered: bool = True,
) -> list:
    """
    Pool.map() equivalent with configurable pool size.
    ordered=True: preserve input order.
    ordered=False: imap_unordered for slightly faster completion with unequal tasks.

    Example:
        prices  = map_parallel(fetch_price, sku_list, size=50)
    """
    pool = Pool(size)
    if ordered:
        return list(pool.map(fn, items))
    else:
        return list(pool.imap_unordered(fn, items))
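

# ─────────────────────────────────────────────────────────────────────────────
# Added sketch (not part of the original helpers): Event and gevent.wait from
# the cheat-sheet above. Event is a broadcast flag many greenlets can block
# on; gevent.wait(count=1) returns as soon as the first greenlet finishes.
# ─────────────────────────────────────────────────────────────────────────────

def first_result(fns: list, timeout: float | None = None) -> Any:
    """Run every zero-arg fn concurrently; return the first completed result."""
    greenlets = [spawn(fn) for fn in fns]
    done = gevent.wait(greenlets, timeout=timeout, count=1)  # first finisher
    gevent.killall([g for g in greenlets if g not in done])  # cancel the rest
    return done[0].value if done else None


def release_together(fns: list) -> list:
    """Spawn workers that all park on one Event, then release them at once."""
    gate = Event()

    def gated(fn: Callable) -> Any:
        gate.wait()                       # parked until gate.set()
        return fn()

    greenlets = [spawn(gated, fn) for fn in fns]
    gate.set()                            # wakes every waiter
    joinall(greenlets)
    return [g.value for g in greenlets]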


# ─────────────────────────────────────────────────────────────────────────────
# 2. Producer-consumer queue patterns
# ─────────────────────────────────────────────────────────────────────────────

def producer_consumer(
    producer_fn: Callable[[], Iterator],
    consumer_fn: Callable[[Any], Any],
    n_consumers: int = 5,
    queue_size: int = 100,
    sentinel: Any = None,
) -> list:
    """
    Classic producer-consumer pipeline using gevent Queue.
    producer_fn: generator or callable that yields items.
    consumer_fn: called with each item; results collected.
    sentinel: value placed in queue to signal consumers to stop.

    Example:
        def read_urls(): yield from open("urls.txt")
        def fetch(url): return requests.get(url.strip()).status_code

        results = producer_consumer(read_urls, fetch, n_consumers=20)
    """
    q       = Queue(queue_size)
    results = []

    def producer():
        for item in producer_fn():
            q.put(item)
        for _ in range(n_consumers):
            q.put(sentinel)

    def consumer():
        while True:
            item = q.get()
            if item is sentinel:
                break
            try:
                results.append(consumer_fn(item))
            except Exception as e:
                log.error("Consumer error: %s", e)

    consumers = [spawn(consumer) for _ in range(n_consumers)]
    spawn(producer)
    joinall(consumers)
    return results


class WorkQueue:
    """
    Greenlet-safe gevent work queue backed by a worker pool.
    Items are put()'d from any greenlet; workers process them concurrently.

    Usage:
        def handle(item):
            data = requests.get(item).json()
            return data["id"]

        wq = WorkQueue(handle, workers=20)
        wq.start()
        for url in urls: wq.put(url)
        results = wq.wait()
        wq.stop()
    """

    def __init__(
        self,
        handler: Callable[[Any], Any],
        workers: int = 10,
        maxsize: int = 0,
    ) -> None:
        self._handler  = handler
        self._workers  = workers
        self._q        = JoinableQueue(maxsize)
        self._results: list = []
        self._pool:    Pool | None = None
        self._greenlets: list = []

    def start(self) -> None:
        self._pool = Pool(self._workers)
        self._greenlets = [self._pool.spawn(self._worker) for _ in range(self._workers)]

    def _worker(self) -> None:
        while True:
            item = self._q.get()
            if item is None:
                self._q.task_done()
                break
            try:
                result = self._handler(item)
                self._results.append(result)
            except Exception as e:
                log.error("Worker error: %s", e)
            finally:
                self._q.task_done()

    def put(self, item: Any, block: bool = True, timeout: float | None = None) -> None:
        self._q.put(item, block=block, timeout=timeout)

    def wait(self, timeout: float | None = None) -> list:
        self._q.join(timeout)
        return list(self._results)

    def stop(self) -> None:
        for _ in self._greenlets:
            self._q.put(None)
        joinall(self._greenlets)
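

# Added sketch (illustrative, not part of WorkQueue above): gevent.iwait
# yields greenlets in completion order, letting callers stream results
# without waiting for the slowest task.

def stream_results(fn: Callable, items: list, concurrency: int = 10) -> Iterator:
    pool = Pool(concurrency)
    greenlets = [pool.spawn(fn, item) for item in items]
    for g in gevent.iwait(greenlets):    # yields each greenlet as it finishes
        yield g.value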


# ─────────────────────────────────────────────────────────────────────────────
# 3. Rate-limited fetcher
# ─────────────────────────────────────────────────────────────────────────────

class RateLimitedFetcher:
    """
    Fetch URLs concurrently with a configurable rate limit (requests/second).
    Paces request starts at a fixed interval using a shared semaphore and gevent sleep.

    Usage:
        fetcher = RateLimitedFetcher(rps=10, concurrency=20)
        results = fetcher.fetch_all(url_list)
    """

    def __init__(
        self,
        rps: float = 10.0,
        concurrency: int = 20,
        timeout: float = 10.0,
    ) -> None:
        self.rps         = rps
        self.concurrency = concurrency
        self.timeout     = timeout
        self._interval   = 1.0 / rps
        self._lock       = gevent.lock.Semaphore(1)

    def _fetch(self, url: str) -> dict:
        import urllib.request
        with self._lock:
            sleep(self._interval)  # serialize request starts: one per interval

        try:
            with gevent.timeout.Timeout(self.timeout):
                with urllib.request.urlopen(url) as resp:
                    return {"url": url, "status": resp.status, "body_len": len(resp.read())}
        except Exception as e:
            return {"url": url, "error": str(e)}

    def fetch_all(self, urls: list[str]) -> list[dict]:
        return map_parallel(self._fetch, urls, size=self.concurrency)
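

# Added sketch: a burst-tolerant token bucket, as a hypothetical alternative
# to the fixed-interval pacing in RateLimitedFetcher. Tokens refill
# continuously at `rate` per second up to `capacity`; acquire() blocks
# cooperatively until a token is available.

class TokenBucket:
    def __init__(self, rate: float, capacity: int) -> None:
        self.rate     = rate                    # tokens added per second
        self.capacity = float(capacity)         # maximum burst size
        self.tokens   = float(capacity)
        self.updated  = time.monotonic()

    def acquire(self) -> None:
        while True:
            now = time.monotonic()
            self.tokens  = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            sleep((1.0 - self.tokens) / self.rate)   # yield until refill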


# ─────────────────────────────────────────────────────────────────────────────
# 4. WSGI server pattern
# ─────────────────────────────────────────────────────────────────────────────

WSGI_EXAMPLE = '''
# gevent WSGI server — handles concurrent requests via greenlets
from gevent import monkey; monkey.patch_all()
from gevent.pywsgi import WSGIServer

def application(environ, start_response):
    import time, json
    # This sleep is non-blocking — other requests continue concurrently
    time.sleep(0.01)
    body = json.dumps({"status": "ok", "workers": "greenlets"}).encode()
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    print("Serving on http://0.0.0.0:8000")
    WSGIServer(("0.0.0.0", 8000), application, log=None).serve_forever()

# For Flask/Django with gevent:
# WSGIServer(("0.0.0.0", 8000), flask_app).serve_forever()
'''
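
# Added sketch (hypothetical): graceful shutdown for the server above.
# gevent.signal_handler registers a cooperative signal callback; server.stop()
# stops accepting connections and unblocks serve_forever().
WSGI_SHUTDOWN_EXAMPLE = '''
from gevent import monkey; monkey.patch_all()
import signal
import gevent
from gevent.pywsgi import WSGIServer

server = WSGIServer(("0.0.0.0", 8000), application)   # application as above
gevent.signal_handler(signal.SIGTERM, server.stop)    # stop on SIGTERM
server.serve_forever()                                # returns after stop()
'''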


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("=== run_concurrent ===")
    start = time.perf_counter()

    def slow_task(n: int) -> int:
        sleep(0.05)  # simulate I/O
        return n * n

    results = run_concurrent(slow_task, list(range(20)), concurrency=10)
    elapsed = time.perf_counter() - start
    print(f"  20 tasks (50ms each) in {elapsed*1000:.0f}ms (serial would be 1000ms)")
    print(f"  First 5 results: {results[:5]}")

    print("\n=== producer_consumer ===")
    def gen_items():
        for i in range(10):
            yield i

    def double(x):
        sleep(0.01)
        return x * 2

    pc_results = producer_consumer(gen_items, double, n_consumers=5)
    print(f"  Processed {len(pc_results)} items: {sorted(pc_results)}")

    print("\n=== run_with_timeout ===")
    def sometimes_slow(n):
        sleep(n)
        return f"done after {n}s"

    result = run_with_timeout(sometimes_slow, 0.1, timeout=1.0)
    print(f"  Fast call: {result}")
    result2 = run_with_timeout(sometimes_slow, 5.0, timeout=0.5, default="timed out")
    print(f"  Slow call: {result2}")

    print("\n=== WorkQueue ===")
    def handle(item):
        sleep(0.01)
        return item ** 2

    wq = WorkQueue(handle, workers=5)
    wq.start()
    for i in range(20):
        wq.put(i)
    final = wq.wait()
    wq.stop()
    print(f"  Processed {len(final)} items via WorkQueue")

For the asyncio alternative: asyncio is Python's built-in async framework, using explicit async/await syntax with tasks and an event loop. gevent achieves the same cooperative scheduling transparently; existing synchronous code (including requests, socket, and stdlib http) becomes concurrent after monkey.patch_all() without changing its source. Use gevent for quick concurrency wins on codebases that can't be rewritten to async/await, and asyncio for new, greenfield async applications.

For the threading alternative: Python threads share the GIL, which limits CPU parallelism, and they incur thread-creation and context-switch overhead. gevent greenlets are lighter (switches take microseconds rather than the milliseconds typical of threads) and handle I/O-bound workloads with thousands of concurrent connections well. Use gevent for high-concurrency I/O, and threading for CPU-bound work that releases the GIL (C extensions).

The Claude Skills 360 bundle includes gevent skill sets covering monkey.patch_all() positioning, run_concurrent() Pool-bounded parallel map, run_with_timeout() cancellable I/O, map_parallel() ordered and unordered, the producer_consumer() queue pattern, the WorkQueue class with start/put/wait/stop, the RateLimitedFetcher rate limiter, the gevent WSGI server with Flask integration, and JoinableQueue task_done() synchronization. Start with the free tier to try gevent cooperative concurrency code generation.
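
To make the trade-off concrete, here is the same fan-out written both ways. Treat these as minimal sketches, and as two separate scripts, since mixing monkey-patching and asyncio in one process is not recommended:

# gevent version: synchronous-looking code, scheduling is implicit
from gevent import monkey; monkey.patch_all()
import time
import gevent

def task(n: int) -> int:
    time.sleep(0.1)               # patched: yields to other greenlets
    return n * n

greenlets = [gevent.spawn(task, i) for i in range(5)]
gevent.joinall(greenlets)
print([g.value for g in greenlets])   # [0, 1, 4, 9, 16]

# asyncio version: the same concurrency with explicit async/await
import asyncio

async def task(n: int) -> int:
    await asyncio.sleep(0.1)      # explicit suspension point
    return n * n

async def main() -> list:
    return await asyncio.gather(*(task(i) for i in range(5)))

print(asyncio.run(main()))        # [0, 1, 4, 9, 16]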
