# CLAUDE.md for toolz

toolz provides functional utilities for Python: composition (pipe, compose, thread_last), currying (curry), sequence operations (take, drop, partition_all, sliding_window, concat, interleave, unique, pluck), dict operations (merge, merge_with, valmap, keymap, itemmap, get), aggregation (groupby, countby, reduceby, frequencies), and helpers such as juxt and memoize. Install with pip install toolz. Claude Code generates toolz pipelines, data transformation functions, and functional aggregation helpers.
## toolz Stack
- Version: toolz >= 0.12 | pip install toolz (cytoolz for C-speed: pip install cytoolz)
- Pipe: pipe(data, fn1, fn2, fn3) — left-to-right function chaining
- Compose: compose(fn3, fn2, fn1) — right-to-left (f(g(h(x))))
- Curry: curry(fn)(arg1) → partial | @curry decorator
- Dict ops: merge(d1,d2) | valmap/keymap | merge_with(reducer, d1, d2)
- Seq ops: groupby/countby/reduceby | take/drop/partition_all/sliding_window
- Utils: memoize | juxt | frequencies | unique | pluck | get
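A minimal sketch of the three core combinators from the list above (inputs chosen for illustration):

```python
from toolz import pipe, compose, curry

# pipe threads a value left to right: strip first, then lower.
assert pipe("  HeLLo  ", str.strip, str.lower) == "hello"

# compose applies right to left: int first, then abs, then str.
to_digits = compose(str, abs, int)
assert to_digits("-42") == "42"

# curry lets you fix arguments one at a time.
add = curry(lambda x, y: x + y)
inc = add(1)
assert inc(41) == 42
```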
## toolz Functional Pipeline
# app/functional.py — toolz pipe, compose, curry, groupby, merge, and sequence ops
from __future__ import annotations
from typing import Any, Callable, Iterable
from toolz import (
compose,
concat,
countby,
curry,
drop,
frequencies,
get,
groupby,
interleave,
interpose,
juxt,
keyfilter,
keymap,
memoize,
merge,
merge_with,
partition_all,
pipe,
pluck,
reduceby,
sliding_window,
take,
thread_last,
unique,
valfilter,
valmap,
)
# ─────────────────────────────────────────────────────────────────────────────
# 1. Pipeline helpers
# ─────────────────────────────────────────────────────────────────────────────
def make_pipeline(*fns: Callable) -> Callable:
"""
Build a reusable left-to-right function pipeline.
Example:
clean = make_pipeline(str.strip, str.lower, lambda s: s.replace("-", "_"))
clean(" Hello-World ") → "hello_world"
"""
def pipeline(value: Any) -> Any:
return pipe(value, *fns)
return pipeline
def compose_validators(*predicates: Callable) -> Callable:
"""
Return a function that passes iff all predicates return True.
Short-circuits on first False.
"""
def all_pass(value: Any) -> bool:
return all(p(value) for p in predicates)
return all_pass
# ─────────────────────────────────────────────────────────────────────────────
# 2. Curried helpers
# ─────────────────────────────────────────────────────────────────────────────
@curry
def safe_get(key: Any, default: Any, mapping: dict) -> Any:
"""Curried dict .get(key, default)."""
return mapping.get(key, default)
@curry
def field(attr: str, obj: Any) -> Any:
"""Curried attribute/key accessor (works on dicts and objects)."""
if isinstance(obj, dict):
return obj.get(attr)
return getattr(obj, attr, None)
@curry
def transform(fn: Callable, items: Iterable) -> list:
"""Curried map: apply fn to every item, return list."""
return list(map(fn, items))
@curry
def keep(predicate: Callable, items: Iterable) -> list:
"""Curried filter: keep items where predicate(item) is truthy."""
return list(filter(predicate, items))
@curry
def reject(predicate: Callable, items: Iterable) -> list:
"""Keep items where predicate is False (opposite of keep)."""
return [x for x in items if not predicate(x)]
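# Sketch: the same curried pattern with plain toolz.curry: partially apply,
# then reuse (values below are illustrative).
def _demo_curried_transform() -> list[str]:
    shout = curry(lambda suffix, s: s.upper() + suffix)("!")
    return [shout(w) for w in ("hi", "bye")]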
# ─────────────────────────────────────────────────────────────────────────────
# 3. Sequence operations
# ─────────────────────────────────────────────────────────────────────────────
def chunks(seq: Iterable, size: int) -> list[list]:
"""Split seq into chunks of size. Last chunk may be smaller."""
return [list(c) for c in partition_all(size, seq)]
def windows(seq: Iterable, size: int) -> list[tuple]:
"""Return all overlapping windows of length size."""
return list(sliding_window(size, seq))
def flatten(seq: Iterable) -> list:
"""Flatten one level of nesting."""
return list(concat(seq))
def interleave_seqs(*seqs: Iterable) -> list:
"""Interleave multiple sequences element-by-element."""
return list(interleave(seqs))
def first_n(seq: Iterable, n: int) -> list:
"""Return the first n items."""
return list(take(n, seq))
def skip_n(seq: Iterable, n: int) -> list:
"""Return seq after skipping the first n items."""
return list(drop(n, seq))
def unique_by(seq: Iterable, key: Callable) -> list:
"""Return unique items by key function (preserves order)."""
return list(unique(seq, key=key))
def intersperse(sep: Any, seq: Iterable) -> list:
"""Insert sep between every pair of items."""
return list(interpose(sep, seq))
def running_windows(seq: list, fn: Callable, size: int) -> list:
"""Apply fn to each sliding window, return list of results."""
return [fn(window) for window in windows(seq, size)]
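# Sketch: the helpers above materialise lists; the underlying toolz calls
# stay lazy iterators (inputs below are illustrative).
def _demo_sequence_ops() -> tuple[list, list]:
    chunked = [list(c) for c in partition_all(3, range(7))]
    window_sums = [sum(w) for w in sliding_window(3, [1, 2, 3, 4])]
    return chunked, window_sums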
# ─────────────────────────────────────────────────────────────────────────────
# 4. Dict operations
# ─────────────────────────────────────────────────────────────────────────────
def deep_merge(*dicts: dict) -> dict:
"""Merge dicts left-to-right; last value wins for conflicting keys."""
return merge(*dicts)
def merge_sum(*dicts: dict) -> dict:
"""Merge dicts, summing values for shared keys."""
return merge_with(sum, *dicts)
def merge_lists(*dicts: dict) -> dict:
    """Merge dicts, concatenating values for shared keys into lists."""
    def combine(values: list) -> list:
        # merge_with passes the list of ALL values for a key, not pairs,
        # so the combiner must take a single list argument.
        out: list = []
        for v in values:
            out.extend(v if isinstance(v, list) else [v])
        return out
    return merge_with(combine, *dicts)
def transform_values(d: dict, fn: Callable) -> dict:
"""Apply fn to every value."""
return valmap(fn, d)
def transform_keys(d: dict, fn: Callable) -> dict:
"""Apply fn to every key."""
return keymap(fn, d)
def filter_values(d: dict, predicate: Callable) -> dict:
"""Keep entries where predicate(value) is True."""
return valfilter(predicate, d)
def filter_keys(d: dict, predicate: Callable) -> dict:
"""Keep entries where predicate(key) is True."""
return keyfilter(predicate, d)
def pick(d: dict, keys: list) -> dict:
    """Extract a subset of keys from d."""
    ks = set(keys)
    return keyfilter(lambda k: k in ks, d)
def omit(d: dict, keys: list) -> dict:
"""Return d without the specified keys."""
ks = set(keys)
return keyfilter(lambda k: k not in ks, d)
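# Sketch: chaining the dict primitives directly (the record is illustrative).
def _demo_dict_ops() -> dict:
    raw = {"Name": " Ada ", "Role": "eng", "Tmp": None}
    cleaned = valfilter(lambda v: v is not None, raw)   # drop null entries
    cleaned = keymap(str.lower, cleaned)                # normalise keys
    return valmap(str.strip, cleaned)                   # tidy values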
# ─────────────────────────────────────────────────────────────────────────────
# 5. Aggregation
# ─────────────────────────────────────────────────────────────────────────────
def group(items: Iterable, key: Callable) -> dict[Any, list]:
"""Group items by key function."""
return dict(groupby(key, items))
def count(items: Iterable, key: Callable) -> dict[Any, int]:
"""Count items by key function."""
return dict(countby(key, items))
def reduce_by(items: Iterable, key: Callable, binop: Callable, init: Any = None) -> dict:
"""Reduce items grouped by key."""
if init is not None:
return dict(reduceby(key, binop, items, init))
return dict(reduceby(key, binop, items))
def word_count(text: str) -> dict[str, int]:
"""Count word frequencies in a string."""
words = text.lower().split()
return dict(frequencies(words))
def pluck_field(records: Iterable[dict], field_name: str) -> list:
"""Extract field_name from every record."""
return list(pluck(field_name, records))
def by_field(records: Iterable[dict], field_name: str) -> dict:
"""Index records by field_name."""
return group(records, lambda r: r.get(field_name))
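# Sketch: reduceby folds each group as the sequence streams, without building
# the intermediate lists that group()/groupby would (records are illustrative).
def _demo_reduceby() -> dict:
    orders = [
        {"user": "ada", "total": 10},
        {"user": "tim", "total": 5},
        {"user": "ada", "total": 7},
    ]
    return dict(reduceby(lambda o: o["user"], lambda acc, o: acc + o["total"], orders, 0))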
# ─────────────────────────────────────────────────────────────────────────────
# 6. Memoization
# ─────────────────────────────────────────────────────────────────────────────
def memoized(fn: Callable, cache: dict | None = None) -> Callable:
"""Wrap fn with toolz memoize. Optionally pass an explicit cache dict."""
if cache is not None:
return memoize(fn, cache=cache)
return memoize(fn)
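# Sketch: passing an explicit cache dict makes the memo inspectable and
# clearable (the squaring function below is illustrative).
def _demo_memoize_cache() -> tuple[list, int]:
    cache: dict = {}
    square = memoize(lambda n: n * n, cache=cache)
    results = [square(x) for x in (2, 3, 2, 3)]
    return results, len(cache)   # repeats hit the cache, so only 2 entries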
# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────
if __name__ == "__main__":
print("=== pipe ===")
result = pipe(
" Hello, World! ",
str.strip,
str.lower,
lambda s: s.replace(",", ""),
)
print("pipe result:", result)
print("\n=== make_pipeline ===")
normalize = make_pipeline(str.strip, str.lower)
print("pipeline:", [normalize(x) for x in [" ALPHA ", "beta ", " GAMMA"]])
print("\n=== curry + transform ===")
double = transform(lambda x: x * 2)
print("double:", double([1, 2, 3, 4, 5]))
print("\n=== chunks ===")
print("chunks:", chunks(range(10), 3))
print("\n=== windows ===")
temps = [22, 24, 21, 25, 23, 26]
moving_avg = running_windows(temps, lambda w: sum(w)/len(w), 3)
print("3-day moving avg:", [round(v, 1) for v in moving_avg])
print("\n=== group / count ===")
words = ["apple", "banana", "avocado", "blueberry", "cherry", "apricot"]
by_first = group(words, lambda w: w[0])
cnt = count(words, lambda w: w[0])
print("group by first:", {k: v for k, v in by_first.items()})
print("count by first:", dict(cnt))
print("\n=== merge ops ===")
a = {"x": 1, "y": 2}
b = {"y": 3, "z": 4}
print("merge: ", deep_merge(a, b))
print("merge_sum: ", merge_sum(a, b))
print("\n=== dict transform ===")
d = {"Name": "Alice", "Age": 30, "Score": None}
print("lower keys:", transform_keys(d, str.lower))
print("no nulls: ", filter_values(d, lambda v: v is not None))
print("pick: ", pick(d, ["Name", "Score"]))
print("\n=== word_count ===")
text = "the cat sat on the mat the cat"
print("word freq:", word_count(text))
print("\n=== memoize ===")
call_count = [0]
def slow_fn(n):
call_count[0] += 1
return n * n
fast_fn = memoized(slow_fn)
_ = [fast_fn(x) for x in [2, 3, 2, 4, 3, 2]]
print(f"calls: {call_count[0]} (vs 6 without memo)")
print("\n=== juxt ===")
stats = juxt(min, max, sum, len)
data = [4, 7, 2, 9, 1, 6]
mn, mx, total, count_val = stats(data)
print(f"min={mn} max={mx} sum={total} count={count_val} mean={total/count_val:.1f}")
## Alternatives
- funcy: a similar functional utility library with a partly overlapping API. toolz has a C-accelerated sister library, cytoolz (same API, roughly 2–5× faster), and is the more commonly cited functional Python toolkit; both are valid choices, but toolz + cytoolz has broader production adoption for data pipeline work.
- itertools / functools: the stdlib covers many of the same operations, but toolz provides a unified API, a consistent currying style, and dict utilities (valmap, merge_with) that the stdlib does not offer in one place.
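As a rough illustration of the stdlib comparison, the same word count written with collections.Counter versus a toolz pipe (a sketch; both produce identical results):

```python
from collections import Counter

from toolz import frequencies, pipe

text = "the cat sat on the mat the cat"

# stdlib: explicit nested calls, no unified pipeline style.
stdlib_counts = Counter(text.lower().split())

# toolz: the same result, threaded left to right through pipe.
toolz_counts = pipe(text, str.lower, str.split, frequencies)

assert dict(stdlib_counts) == dict(toolz_counts)
assert toolz_counts["the"] == 3
```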