
Claude Code for Optuna: Hyperparameter Optimization

Published: September 6, 2027
Read time: 5 min
By: Claude Skills 360

Optuna finds the best hyperparameters with Bayesian optimization. Install it with pip install optuna, then create a study: study = optuna.create_study(direction="maximize", study_name="churn-gbm", sampler=optuna.samplers.TPESampler()). The objective is a plain function — def objective(trial): — that samples parameters with trial.suggest_float("lr", 1e-4, 0.1, log=True), trial.suggest_int("n_estimators", 50, 500), and trial.suggest_categorical("criterion", ["gini", "entropy"]), then returns the metric to optimize. Run the search with study.optimize(objective, n_trials=100, timeout=3600, n_jobs=4) and read the results from study.best_params, study.best_value, and study.best_trial.

Pruning stops unpromising trials early: pass pruner=optuna.pruners.MedianPruner(n_startup_trials=5) to the study, report intermediate values inside the training loop with trial.report(value, step), then check if trial.should_prune(): raise optuna.TrialPruned(). Persistent storage — storage="sqlite:///optuna.db" or storage="postgresql://user:pass@host/optuna" — enables parallelism across processes.

Visualization: optuna.visualization.plot_optimization_history(study), plot_param_importances(study), and plot_contour(study, params=["lr", "max_depth"]); for an interactive UI, run optuna-dashboard sqlite:///optuna.db. Integrations: LightGBMTunerCV auto-tunes all LightGBM params, optuna.integration.PyTorchLightningPruningCallback(trial, monitor="val_loss") prunes PyTorch Lightning runs, and optuna.integration.MLflowCallback logs each trial to MLflow. study.trials_dataframe() exports trials as a pandas DataFrame, and optuna.copy_study plus study.add_trial enable warm starts. Claude Code generates Optuna objective functions, sampler configs, pruning setups, parallel study configs, and integration callbacks.

CLAUDE.md for Optuna

## Optuna Stack
- Version: optuna >= 3.5
- Study: optuna.create_study(direction, sampler=TPESampler()/CmaEsSampler(), pruner=MedianPruner())
- Trials: trial.suggest_float(name, low, high, log=True) / suggest_int / suggest_categorical
- Prune: trial.report(value, step); if trial.should_prune(): raise optuna.TrialPruned()
- Run: study.optimize(objective, n_trials=100, n_jobs=-1, timeout=3600)
- Best: study.best_params — dict, study.best_value — float
- Storage: optuna.create_study(storage="postgresql://..." or "sqlite:///") for parallel
- Viz: optuna.visualization.plot_param_importances(study) etc.

Objective Functions

# optimization/optuna_search.py — Optuna hyperparameter optimization
from __future__ import annotations
import pickle
from typing import Any

import numpy as np
import optuna
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

optuna.logging.set_verbosity(optuna.logging.WARNING)

FEATURE_COLS = ["age", "tenure_days", "monthly_spend", "support_tickets", "last_login_days"]
TARGET_COL   = "churned"


def load_data(path: str = "data/train.csv") -> tuple[np.ndarray, np.ndarray]:
    df = pd.read_csv(path)
    return df[FEATURE_COLS].values, df[TARGET_COL].values


# ── Objective: GradientBoosting ───────────────────────────────────────────────

def gbm_objective(trial: optuna.Trial) -> float:
    """Objective for tuning GradientBoostingClassifier."""
    params = {
        "n_estimators":      trial.suggest_int("n_estimators", 50, 600, step=50),
        "learning_rate":     trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "max_depth":         trial.suggest_int("max_depth", 2, 8),
        "min_samples_leaf":  trial.suggest_int("min_samples_leaf", 5, 100, log=True),
        "subsample":         trial.suggest_float("subsample", 0.5, 1.0),
        "max_features":      trial.suggest_categorical("max_features", ["sqrt", "log2", None]),
        "min_samples_split": trial.suggest_int("min_samples_split", 2, 20),
    }

    X, y = load_data()
    cv   = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    aucs = []

    for fold, (train_idx, val_idx) in enumerate(cv.split(X, y)):
        X_tr, X_val = X[train_idx], X[val_idx]
        y_tr, y_val = y[train_idx], y[val_idx]

        pipeline = Pipeline([
            ("scaler", StandardScaler()),
            ("clf",    GradientBoostingClassifier(**params, random_state=42)),
        ])

        # Each completed CV fold serves as one pruning step (reported below)
        pipeline.fit(X_tr, y_tr)
        auc = roc_auc_score(y_val, pipeline.predict_proba(X_val)[:, 1])
        aucs.append(auc)

        # Report intermediate for pruning
        trial.report(float(np.mean(aucs)), step=fold)
        if trial.should_prune():
            raise optuna.TrialPruned()

    return float(np.mean(aucs))


# ── Objective: model type selection ──────────────────────────────────────────

def multi_model_objective(trial: optuna.Trial) -> float:
    """Objective that also searches over model architecture."""
    model_type = trial.suggest_categorical("model_type", ["gbm", "rf"])
    X, y = load_data()
    cv   = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)

    # Branch-specific parameter names, as in Optuna's conditional search-space
    # examples: reusing one name (e.g. "max_depth") with different ranges
    # across branches confuses samplers and the importance plots.
    if model_type == "gbm":
        clf = GradientBoostingClassifier(
            n_estimators=trial.suggest_int("gbm_n_estimators", 50, 400),
            learning_rate=trial.suggest_float("lr", 1e-3, 0.2, log=True),
            max_depth=trial.suggest_int("gbm_max_depth", 2, 6),
            random_state=42,
        )
    else:  # rf
        clf = RandomForestClassifier(
            n_estimators=trial.suggest_int("rf_n_estimators", 50, 400),
            max_depth=trial.suggest_int("rf_max_depth", 3, 15),
            min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 20),
            class_weight=trial.suggest_categorical("class_weight", [None, "balanced"]),
            random_state=42,
        )

    pipeline = Pipeline([("scaler", StandardScaler()), ("clf", clf)])
    aucs = []
    for train_idx, val_idx in cv.split(X, y):
        pipeline.fit(X[train_idx], y[train_idx])
        aucs.append(roc_auc_score(y[val_idx], pipeline.predict_proba(X[val_idx])[:, 1]))

    return float(np.mean(aucs))


# ── Study creation and optimization ──────────────────────────────────────────

def run_optimization(
    n_trials:   int = 100,
    n_jobs:     int = 4,
    storage:    str = "sqlite:///optuna_churn.db",
    study_name: str = "churn-gbm",
) -> optuna.Study:
    """Create or load a study and run optimization."""
    study = optuna.create_study(
        study_name=study_name,
        direction="maximize",
        storage=storage,
        load_if_exists=True,
        sampler=optuna.samplers.TPESampler(
            n_startup_trials=10,
            multivariate=True,
            constant_liar=True,       # Better for parallel execution
        ),
        pruner=optuna.pruners.MedianPruner(
            n_startup_trials=5,
            n_warmup_steps=1,
        ),
    )

    study.optimize(
        gbm_objective,
        n_trials=n_trials,
        n_jobs=n_jobs,
        timeout=None,
        show_progress_bar=True,
        callbacks=[
            optuna.study.MaxTrialsCallback(n_trials, states=[optuna.trial.TrialState.COMPLETE]),
        ],
    )

    print(f"\nBest AUC:    {study.best_value:.4f}")
    print(f"Best params: {study.best_params}")
    return study


# ── Retrain best model ────────────────────────────────────────────────────────

def retrain_best(study: optuna.Study, output_path: str = "best_model.pkl") -> Pipeline:
    """Retrain on full data using best hyperparameters."""
    X, y = load_data()
    params = study.best_params.copy()

    pipeline = Pipeline([
        ("scaler", StandardScaler()),
        ("clf",    GradientBoostingClassifier(**params, random_state=42)),
    ])
    pipeline.fit(X, y)

    with open(output_path, "wb") as f:
        pickle.dump(pipeline, f)
    print(f"Best model saved to {output_path}")
    return pipeline


if __name__ == "__main__":
    study = run_optimization(n_trials=100, n_jobs=2)
    retrain_best(study)

LightGBM Integration

# optimization/lgbm_optuna.py — LightGBMTunerCV auto-tuning
from __future__ import annotations
import lightgbm as lgb
import numpy as np
import optuna
import optuna.integration.lightgbm as lgb_optuna
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

optuna.logging.set_verbosity(optuna.logging.WARNING)

FEATURE_COLS = ["age", "tenure_days", "monthly_spend", "support_tickets", "last_login_days"]


def tune_lightgbm(data_path: str = "data/train.csv") -> dict:
    """Use LightGBMTunerCV to auto-tune all LightGBM hyperparameters."""
    df      = pd.read_csv(data_path)
    X_train, X_val, y_train, y_val = train_test_split(
        df[FEATURE_COLS].values, df["churned"].values,
        test_size=0.2, stratify=df["churned"].values, random_state=42,
    )

    dtrain = lgb.Dataset(X_train, label=y_train)
    dval   = lgb.Dataset(X_val,   label=y_val, reference=dtrain)

    params = {
        "objective":    "binary",
        "metric":       "auc",
        "verbosity":    -1,
        "boosting_type": "gbdt",
    }

    # LightGBMTunerCV handles the full parameter search automatically
    tuner = lgb_optuna.LightGBMTunerCV(
        params,
        dtrain,
        num_boost_round=1000,
        nfold=5,
        seed=42,
        callbacks=[lgb.early_stopping(50), lgb.log_evaluation(-1)],
    )
    tuner.run()

    best_params = tuner.best_params
    print(f"Best AUC (CV): {tuner.best_score:.4f}")
    print(f"Best params: {best_params}")
    return best_params


def lgbm_objective_manual(trial: optuna.Trial) -> float:
    """Manual LightGBM objective for full control."""

    df = pd.read_csv("data/train.csv")
    X, y = df[FEATURE_COLS].values, df["churned"].values
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    params = {
        "objective":        "binary",
        "metric":           "auc",
        "verbosity":        -1,
        "num_leaves":       trial.suggest_int("num_leaves", 20, 300),
        "learning_rate":    trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "feature_fraction": trial.suggest_float("feature_fraction", 0.4, 1.0),
        "bagging_fraction": trial.suggest_float("bagging_fraction", 0.4, 1.0),
        "bagging_freq":     trial.suggest_int("bagging_freq", 1, 7),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "reg_alpha":        trial.suggest_float("reg_alpha", 1e-8, 10.0, log=True),
        "reg_lambda":       trial.suggest_float("reg_lambda", 1e-8, 10.0, log=True),
    }

    callbacks = [
        lgb.early_stopping(30, verbose=False),
        lgb.log_evaluation(-1),
        optuna.integration.lightgbm.LightGBMPruningCallback(trial, "auc"),
    ]

    model = lgb.train(
        params,
        lgb.Dataset(X_tr, label=y_tr),
        num_boost_round=500,
        valid_sets=[lgb.Dataset(X_val, label=y_val)],
        callbacks=callbacks,
    )

    return float(model.best_score["valid_0"]["auc"])

Visualization and Dashboard

# scripts/visualize_study.py — Optuna visualization + dashboard
import optuna
import optuna.visualization as vis
import plotly.io as pio


def analyze_study(study_name: str, storage: str = "sqlite:///optuna_churn.db") -> None:
    study = optuna.load_study(study_name=study_name, storage=storage)

    print(f"Completed trials: {len([t for t in study.trials if t.state == optuna.trial.TrialState.COMPLETE])}")
    print(f"Pruned trials:    {len([t for t in study.trials if t.state == optuna.trial.TrialState.PRUNED])}")
    print(f"Best value: {study.best_value:.4f}")
    print(f"Best params: {study.best_params}")

    # Export top-10 trials
    df = study.trials_dataframe()
    print("\nTop 10 trials by AUC:")
    print(df.nlargest(10, "value")[["number", "value", "params_n_estimators", "params_learning_rate", "params_max_depth"]])

    # Save plots (create reports/ if it doesn't exist yet)
    import os
    os.makedirs("reports", exist_ok=True)

    fig = vis.plot_optimization_history(study)
    pio.write_html(fig, "reports/optimization_history.html")

    fig = vis.plot_param_importances(study)
    pio.write_html(fig, "reports/param_importances.html")

    fig = vis.plot_contour(study, params=["learning_rate", "max_depth"])
    pio.write_html(fig, "reports/contour_lr_depth.html")

    fig = vis.plot_parallel_coordinate(study)
    pio.write_html(fig, "reports/parallel_coordinate.html")

    print("\nCharts saved to reports/")
    print("Run: optuna-dashboard sqlite:///optuna_churn.db  # for interactive UI")


if __name__ == "__main__":
    analyze_study("churn-gbm")

Reach for the Ray Tune alternative when you need distributed hyperparameter search across a Ray cluster, Population Based Training, the ASHA scheduler, and integration with any ML framework including custom training loops — Ray Tune scales across many machines, while Optuna is simpler to set up for single-machine or multi-process search, supports more sampler algorithms out of the box (TPE, CMA-ES, QMC), and ships the LightGBMTuner auto-integration. Reach for the Weights & Biases Sweeps alternative when you already use W&B for experiment tracking and want hyperparameter search tightly integrated with your W&B dashboard, artifact logging, and team-visible run comparisons — Sweeps are simpler if you are already in the W&B ecosystem, while Optuna works as a standalone library with no external service dependency and pluggable storage backends. The Claude Skills 360 bundle includes Optuna skill sets covering objective functions, TPE and CMA-ES samplers, pruning, LightGBM integration, and visualization. Start with the free tier to try hyperparameter optimization generation.
