
Claude Code for cachetools: Python In-Memory Caching

Published: January 26, 2028
Read time: 5 min
By: Claude Skills 360

cachetools provides in-memory cache data structures for Python. Install it with pip install cachetools. Four eviction policies ship in the box, all sharing the same dict-like interface:

- LRU: from cachetools import LRUCache; cache = LRUCache(maxsize=128). Evicts the least-recently-used entry when full.
- TTL: from cachetools import TTLCache; cache = TTLCache(maxsize=128, ttl=300). Entries expire after 300 seconds.
- LFU: from cachetools import LFUCache. Evicts the least-frequently-used entry.
- RR: from cachetools import RRCache. Evicts a random entry.

Every cache is a MutableMapping, compatible with all dict operations: cache["key"] = value stores, cache.get("key") reads, del cache["key"] removes, and __contains__, keys(), values(), and items() work as on a plain dict. A miss via cache["key"] raises KeyError, so prefer cache.get(key, default). len(cache) and cache.currsize give the current size, cache.maxsize the capacity, and popitem() evicts one item according to the cache's policy. Invalidate a single entry with cache.pop("key", None) or wipe everything with cache.clear().

For function memoization, use the decorator: from cachetools import cached; @cached(cache=LRUCache(maxsize=256)), or @cached(cache=TTLCache(maxsize=64, ttl=60)) for results that go stale. For per-instance method caches, use @cachedmethod with a getter: from cachetools import cachedmethod; @cachedmethod(lambda self: self._cache). Custom cache keys come from cachetools.keys: @cached(cache={}, key=lambda a, b: hashkey(a, b)), and typedkey gives 1 (int) and 1.0 (float) separate entries. For concurrent access, pass a lock: from threading import RLock; @cached(cache=LRUCache(maxsize=128), lock=RLock()). Claude Code generates cachetools decorators, TTL wrappers, and thread-safe cache patterns.
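These dict-like basics can be sketched in a few lines. This is a minimal illustration, separate from the full pipeline below:

```python
# Dict-like interface shared by every cachetools cache class.
from cachetools import LRUCache

cache = LRUCache(maxsize=2)
cache["a"] = 1                     # MutableMapping: plain item assignment
cache["b"] = 2
print("a" in cache)                # __contains__ works like a dict -> True
print(cache.get("missing", 0))     # miss via .get returns the default -> 0
try:
    cache["missing"]               # a plain miss raises KeyError
except KeyError:
    print("KeyError on miss")
cache["c"] = 3                     # over capacity: "a" (least recently used) is evicted
print(len(cache), cache.currsize, cache.maxsize)   # 2 2 2
key, value = cache.popitem()       # evicts one item per the cache's policy (LRU here)
print(sorted(cache.keys()))
```

Note that __contains__ does not refresh an entry's recency; only reads through __getitem__ (and writes) do, which is why "a" is still the eviction victim above.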

CLAUDE.md for cachetools

## cachetools Stack
- Version: cachetools >= 5.3 | pip install cachetools
- LRU: LRUCache(maxsize=128) — evict least-recently-used on overflow
- TTL: TTLCache(maxsize=128, ttl=300) — auto-expire after N seconds
- Decorator: @cached(cache=LRUCache(128)) | @cached(TTLCache(64, ttl=60))
- Method: @cachedmethod(lambda self: self._cache, key=hashkey)
- Thread: @cached(cache=..., lock=RLock()) — add lock for concurrent access
- Invalidate: cache.pop(key, None) | cache.clear() — manual eviction

cachetools In-Memory Cache Pipeline

# app/cache.py — cachetools LRU, TTL, and decorator patterns
from __future__ import annotations

import time
from threading import RLock
from cachetools import LFUCache, LRUCache, RRCache, TTLCache, cached, cachedmethod
from cachetools.keys import hashkey, typedkey

# ─────────────────────────────────────────────────────────────────────────────
# 1. Cache instances — direct dict-like usage
# ─────────────────────────────────────────────────────────────────────────────

def demo_cache_types() -> None:
    # LRU — great for repeated access patterns (API responses, parsed configs)
    lru: LRUCache = LRUCache(maxsize=3)
    lru["a"] = 1
    lru["b"] = 2
    lru["c"] = 3
    lru["d"] = 4   # evicts "a" (least recently used)
    assert "a" not in lru
    assert "d" in lru
    print(f"LRU: {dict(lru)}")

    # TTL — great for external API responses that go stale
    ttl: TTLCache = TTLCache(maxsize=128, ttl=0.1)
    ttl["token"] = "abc123"
    assert "token" in ttl
    time.sleep(0.15)
    assert "token" not in ttl   # expired
    print("TTL: expired as expected")

    # LFU — great when access frequency predicts future access
    lfu: LFUCache = LFUCache(maxsize=3)
    lfu["hot"] = "frequent"
    for _ in range(5):
        _ = lfu["hot"]          # boost frequency
    lfu["cold1"] = "rare"
    lfu["cold2"] = "rare"
    lfu["cold3"] = "new"        # evicts one of the cold items
    assert "hot" in lfu
    print(f"LFU size: {lfu.currsize}/{lfu.maxsize}")

    # RR — random eviction, O(1), no eviction-order bookkeeping
    rr: RRCache = RRCache(maxsize=4)
    for i in range(6):
        rr[i] = i * 10
    assert rr.currsize == 4
    print(f"RR: {dict(rr)}")


# ─────────────────────────────────────────────────────────────────────────────
# 2. @cached decorator — automatic function memoization
# ─────────────────────────────────────────────────────────────────────────────

# LRU memoization — pure functions that are expensive to compute
@cached(cache=LRUCache(maxsize=256))
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)


# TTL cache — results valid for 5 minutes
@cached(cache=TTLCache(maxsize=64, ttl=300))
def get_exchange_rate(from_currency: str, to_currency: str) -> float:
    """Simulates an expensive external API call."""
    time.sleep(0.05)
    rates = {"USD_EUR": 0.92, "EUR_USD": 1.09, "USD_GBP": 0.79}
    return rates.get(f"{from_currency}_{to_currency}", 1.0)


# Custom key — cache by (query, page) but ignore per-request timestamps
_query_cache: LRUCache = LRUCache(maxsize=128)


@cached(
    cache=_query_cache,
    key=lambda query, page=1, **_kwargs: hashkey(query.strip().lower(), page),
)
def search_products(query: str, page: int = 1, timestamp: float = 0) -> list[dict]:
    """timestamp parameter is ignored in cache key via custom key function."""
    time.sleep(0.02)
    return [{"id": i, "name": f"{query} result {i}", "page": page} for i in range(10)]


# typedkey — treat 1 (int) and 1.0 (float) as different keys
@cached(cache=LRUCache(maxsize=64), key=typedkey)
def typed_lookup(value: int | float) -> str:
    return f"{type(value).__name__}:{value}"


# ─────────────────────────────────────────────────────────────────────────────
# 3. Thread-safe caching — concurrent access
# ─────────────────────────────────────────────────────────────────────────────

_users_cache: TTLCache = TTLCache(maxsize=512, ttl=120)
_users_lock = RLock()


@cached(cache=_users_cache, lock=_users_lock)
def get_user(user_id: int) -> dict:
    """Thread-safe TTL-cached DB lookup — safe for multi-threaded Flask/Django."""
    time.sleep(0.01)
    return {"id": user_id, "name": f"User {user_id}", "active": True}


def invalidate_user(user_id: int) -> None:
    """Manually evict a cache entry after an update."""
    key = hashkey(user_id)
    with _users_lock:
        _users_cache.pop(key, None)


# ─────────────────────────────────────────────────────────────────────────────
# 4. @cachedmethod — per-instance method caches
# ─────────────────────────────────────────────────────────────────────────────

class ProductRepository:
    """
    cachedmethod uses a getter to find the cache on the instance.
    Each ProductRepository instance has its own cache — useful when
    different repos point to different databases.
    """

    def __init__(self, max_cached: int = 256) -> None:
        self._cache: LRUCache = LRUCache(maxsize=max_cached)
        self._lock = RLock()

    @cachedmethod(
        cache=lambda self: self._cache,
        key=lambda self, product_id: hashkey(product_id),
        lock=lambda self: self._lock,
    )
    def get_product(self, product_id: int) -> dict:
        """Cached per-instance product lookup."""
        time.sleep(0.01)
        return {"id": product_id, "name": f"Product {product_id}", "price": product_id * 9.99}

    def invalidate(self, product_id: int) -> None:
        key = hashkey(product_id)
        with self._lock:
            self._cache.pop(key, None)

    @property
    def cache_info(self) -> dict:
        return {
            "size":    self._cache.currsize,
            "maxsize": self._cache.maxsize,
        }


# ─────────────────────────────────────────────────────────────────────────────
# 5. API client with TTL caching
# ─────────────────────────────────────────────────────────────────────────────

class WeatherClient:
    """
    Real-world pattern: wrap an external API with a TTL cache.
    Cache at the method level so each city gets its own TTL slot.
    """

    def __init__(self, api_key: str, ttl: float = 600) -> None:
        self._api_key = api_key
        self._cache: TTLCache = TTLCache(maxsize=256, ttl=ttl)
        self._lock = RLock()

    @cachedmethod(
        cache=lambda self: self._cache,
        key=lambda self, city, units="metric": hashkey(city.lower(), units),
        lock=lambda self: self._lock,
    )
    def get_weather(self, city: str, units: str = "metric") -> dict:
        """Cached for `ttl` seconds per (city, units) pair."""
        time.sleep(0.05)   # simulate HTTP
        return {
            "city":   city,
            "temp":   20.0,
            "units":  units,
            "fetched_at": time.time(),
        }

    def prefetch(self, cities: list[str]) -> None:
        """Warm the cache for a list of cities."""
        for city in cities:
            self.get_weather(city)

    @property
    def cache_size(self) -> int:
        return self._cache.currsize


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("=== Cache types ===")
    demo_cache_types()

    print("\n=== Fibonacci (LRU) ===")
    start = time.perf_counter()
    print(f"  fib(35) = {fibonacci(35)}")
    first = time.perf_counter() - start
    start = time.perf_counter()
    print(f"  fib(35) = {fibonacci(35)} (cached)")
    second = time.perf_counter() - start
    print(f"  First: {first*1000:.2f}ms  Cached: {second*1000:.4f}ms")

    print("\n=== Exchange rate (TTL) ===")
    r1 = get_exchange_rate("USD", "EUR")
    r2 = get_exchange_rate("USD", "EUR")   # cached
    print(f"  USD→EUR: {r1} (both calls return {r2})")

    print("\n=== Search (custom key, ignores timestamp) ===")
    r1 = search_products("laptop", page=1, timestamp=1000.0)
    r2 = search_products("laptop", page=1, timestamp=9999.0)   # same cache entry
    print(f"  same result: {r1[0] == r2[0]}")

    print("\n=== Thread-safe user cache ===")
    u = get_user(42)
    print(f"  {u['name']}")
    invalidate_user(42)
    u2 = get_user(42)   # re-fetched
    print(f"  after invalidate: {u2['name']}")

    print("\n=== ProductRepository ===")
    repo = ProductRepository(max_cached=10)
    p1 = repo.get_product(1)
    p2 = repo.get_product(1)   # cached
    print(f"  {p1['name']} — cache size: {repo.cache_info['size']}")

    print("\n=== WeatherClient ===")
    client = WeatherClient(api_key="demo", ttl=60)
    client.prefetch(["London", "Paris", "Tokyo"])
    print(f"  cached {client.cache_size} cities")
    w = client.get_weather("London")   # from cache
    print(f"  London: {w['temp']}°{w['units']}")

Compared with the functools.lru_cache alternative: @functools.lru_cache(maxsize=128) provides LRU memoization for plain functions, but it has no TTL expiration, cannot be selectively invalidated (cache_clear() wipes everything), and exposes no dict-like interface for introspection. cachetools covers all of these: TTLCache auto-expires stale entries, cache.pop(key) invalidates individual keys, the lock=RLock() parameter makes @cached safe under concurrent access, and cache.currsize / cache.maxsize report live capacity.

Compared with the Redis alternative: Redis caching is durable, shared across processes and hosts, and survives application restarts, while cachetools lives in-process with microsecond access latency and zero network overhead. Use cachetools for hot per-process caches that are cheap to recompute (parsed configs, decoded JWTs, computed exchange rate conversions), and Redis for shared cross-process state that must survive restarts.

The Claude Skills 360 bundle includes cachetools skill sets covering the LRUCache/TTLCache/LFUCache/RRCache data structures, the @cached decorator with LRU and TTL, custom key functions with hashkey and typedkey, @cachedmethod for per-instance caches, thread-safe @cached with RLock, manual invalidation with cache.pop, cache capacity monitoring, the WeatherClient and ProductRepository real-world patterns, TTL expiry verification in tests, and cache warming with prefetch. Start with the free tier to try in-memory caching code generation.
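The invalidation difference between the two decorators can be sketched side by side. get_rate_lru and get_rate_ttl are made-up stand-ins for an expensive lookup, not part of the pipeline above:

```python
# Contrast: functools.lru_cache (all-or-nothing) vs cachetools TTLCache (per-key).
import functools

from cachetools import TTLCache, cached
from cachetools.keys import hashkey


@functools.lru_cache(maxsize=128)
def get_rate_lru(pair: str) -> float:
    return 1.0                         # no TTL; only cache_clear() can invalidate


_rates: TTLCache = TTLCache(maxsize=128, ttl=300)


@cached(cache=_rates)
def get_rate_ttl(pair: str) -> float:
    return 1.0                         # entries also auto-expire after 300 s


get_rate_lru("USD_EUR")
get_rate_lru.cache_clear()             # wipes every cached pair at once

get_rate_ttl("USD_EUR")
_rates.pop(hashkey("USD_EUR"), None)   # evicts just this pair
print(_rates.currsize, _rates.maxsize) # live introspection: 0 128
```

The hashkey call must mirror the default key @cached uses, which is why manual invalidation pops hashkey("USD_EUR") rather than the raw string.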

