
Claude Code for SHAP: Model Explainability

Published: November 8, 2027
Read time: 5 min read
By: Claude Skills 360

SHAP explains any model's predictions using Shapley values from game theory. Install with pip install shap, then import shap. For tree models, explainer = shap.TreeExplainer(model) and shap_values = explainer(X) return a shap.Explanation object: .values holds the (N, features) attribution array, .base_values the expected model output, and .data the inputs. Index a single observation with shap_values[0]; for binary classifiers that return a three-dimensional array, take the positive class with shap_values[:, :, 1].

For plots: shap.summary_plot(shap_values, X) draws the beeswarm-style dot plot of all features and samples; shap.plots.bar(shap_values) shows mean |SHAP| per feature; shap.plots.waterfall(shap_values[0]) breaks down a single prediction; shap.plots.force(shap_values[0]) renders an interactive HTML force plot; and shap.dependence_plot("feature_name", shap_values.values, X) plots SHAP value against feature value, colored by the strongest interacting feature.

Beyond trees: shap.LinearExplainer(linear_model, X_train) for linear models, shap.DeepExplainer(keras_model, background) for neural networks, and shap.KernelExplainer(predict_fn, shap.sample(X, 100)) for any black-box model. Global importance is np.abs(shap_values.values).mean(axis=0); interaction values come from tree_explainer.shap_interaction_values(X) as an (N, features, features) array; and shap.Explanation(values, base_values=..., data=..., feature_names=...) builds a custom explanation object. Claude Code generates SHAP explainability dashboards, feature importance reporters, fairness auditors, and model debugging scripts.
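A minimal end-to-end sketch of that workflow; model and X_test are placeholders for any fitted tree ensemble and its feature DataFrame:

# Quickstart sketch: `model` and `X_test` are assumed placeholders
import numpy as np
import shap

explainer   = shap.TreeExplainer(model)
shap_values = explainer(X_test)          # shap.Explanation: .values, .base_values, .data

if shap_values.values.ndim == 3:         # binary classifier: keep the positive class
    shap_values = shap_values[:, :, 1]

shap.plots.bar(shap_values)              # global: mean |SHAP| per feature
shap.plots.waterfall(shap_values[0])     # local: one prediction's breakdown
print(np.abs(shap_values.values).mean(axis=0))   # global importance as a raw array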

CLAUDE.md for SHAP

## SHAP Stack
- Version: shap >= 0.44
- Tree: shap.TreeExplainer(model)(X) → Explanation object
- Linear: shap.LinearExplainer(linear_model, X_background)(X)
- Deep: shap.DeepExplainer(nn_model, X_background)(X)
- Any model: shap.KernelExplainer(predict_fn, shap.sample(X, 100))(X)
- Access: .values (N,F) | .base_values | .data
- Global: shap.plots.bar(shap_values) | shap.summary_plot(sv, X) (beeswarm-style dot)
- Local: shap.plots.waterfall(shap_values[i]) | shap.plots.force(shap_values[i])
- Binary: shap_values[:,:,1] for the positive class
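One detail the cheat sheet glosses over: the force plot renders as interactive HTML rather than a static image. A minimal sketch of exporting it with shap.save_html, assuming shap_values from the quickstart above:

# Sketch: export an interactive force plot (assumes `shap_values` as above)
import shap

force = shap.plots.force(shap_values[0])   # single-prediction force plot
shap.save_html("force_plot.html", force)   # standalone HTML; open in a browser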

SHAP Explainability Pipeline

# ml/shap_pipeline.py — model explainability with SHAP
from __future__ import annotations
import warnings
import numpy as np
import pandas as pd
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # Non-interactive backend; must be set before pyplot is imported
import matplotlib.pyplot as plt

import shap

warnings.filterwarnings("ignore")


# ── 1. Explainer creation ─────────────────────────────────────────────────────

def create_tree_explainer(
    model,
    X_background: np.ndarray | pd.DataFrame | None = None,
    feature_perturbation: str = "tree_path_dependent",
    model_output:         str = "raw",   # "raw" | "probability" | "log_loss"
) -> shap.TreeExplainer:
    """
    Create TreeExplainer for tree-based models.
    Supports XGBoost, LightGBM, scikit-learn trees, CatBoost, etc.

    feature_perturbation:
    - "tree_path_dependent": exact SHAP (fast, no background needed)
    - "interventional":      causal SHAP (requires X_background)

    model_output:
    - "raw":         log-odds for classifiers, raw value for regressors
    - "probability": probability scale (slower, also requires X_background)
    """
    return shap.TreeExplainer(
        model,
        data=X_background,
        feature_perturbation=feature_perturbation,
        model_output=model_output,
    )
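# Example (sketch): interventional SHAP needs background data. `xgb_model` and
# `X_train` are hypothetical placeholders:
#   explainer = create_tree_explainer(
#       xgb_model,
#       X_background=X_train.sample(200, random_state=0),
#       feature_perturbation="interventional",
#   )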


def create_linear_explainer(
    model,
    X_background: np.ndarray | pd.DataFrame,
) -> shap.LinearExplainer:
    """
    Create LinearExplainer for linear models (LogisticRegression, Ridge, Lasso, ElasticNet).
    X_background should be a representative sample (~100-1000 rows).
    """
    masker = shap.maskers.Independent(X_background)
    return shap.LinearExplainer(model, masker)


def create_deep_explainer(
    model,
    X_background: np.ndarray,
    n_background: int = 100,
) -> shap.DeepExplainer:
    """
    Create DeepExplainer for Keras/PyTorch neural networks.
    X_background: representative background samples for baseline expectation.
    """
    bg = shap.sample(X_background, n_background) if len(X_background) > n_background else X_background
    return shap.DeepExplainer(model, bg)


def create_kernel_explainer(
    predict_fn,
    X_background: np.ndarray | pd.DataFrame,
    n_background: int = 100,
    link:         str = "identity",   # "identity" | "logit"
) -> shap.KernelExplainer:
    """
    Model-agnostic KernelExplainer (works with ANY predict function).
    Slower than TreeExplainer — use for black-box / custom models.
    link="logit" for f(x)=probability → SHAP values in log-odds space.
    """
    bg = shap.sample(X_background, n_background)
    return shap.KernelExplainer(predict_fn, bg, link=link)
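# Example usage (sketch): wrap a probability output for KernelExplainer. `clf`,
# `X_train`, and `X_test` are hypothetical placeholders; link="logit" puts the
# SHAP values in log-odds space:
#   kernel_exp = create_kernel_explainer(lambda X: clf.predict_proba(X)[:, 1],
#                                        X_train, link="logit")
#   kernel_sv  = kernel_exp.shap_values(X_test[:50], nsamples=200)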


# ── 2. Computing SHAP values ──────────────────────────────────────────────────

def compute_shap_values(
    explainer,
    X:                np.ndarray | pd.DataFrame,
    check_additivity: bool = False,
) -> shap.Explanation:
    """
    Compute SHAP values for a dataset.
    Returns shap.Explanation with .values (N, F), .base_values, .data.
    check_additivity: set False for speed (disables the SHAP sum check).
    Not every explainer accepts this keyword, hence the fallback.
    """
    try:
        return explainer(X, check_additivity=check_additivity)
    except TypeError:
        return explainer(X)


def shap_values_binary(shap_vals: shap.Explanation) -> shap.Explanation:
    """
    For binary classifiers that return (N, F, 2) SHAP values,
    extract the positive class (index 1).
    """
    if shap_vals.values.ndim == 3:
        return shap_vals[:, :, 1]
    return shap_vals


# ── 3. Global feature importance ─────────────────────────────────────────────

def global_importance(
    shap_vals:     shap.Explanation,
    feature_names: list[str] | None = None,
    top_n:         int = 20,
) -> pd.DataFrame:
    """
    Compute mean |SHAP| for global feature importance.
    Returns DataFrame sorted by importance descending.
    """
    vals  = shap_vals.values if hasattr(shap_vals, "values") else shap_vals
    if vals.ndim == 3:
        vals = vals[:, :, 1]

    names = feature_names or (
        list(shap_vals.feature_names) if hasattr(shap_vals, "feature_names") and shap_vals.feature_names is not None
        else [f"feature_{i}" for i in range(vals.shape[1])]
    )
    mean_abs = np.abs(vals).mean(axis=0)
    df = pd.DataFrame({"feature": names, "mean_abs_shap": mean_abs})
    return df.sort_values("mean_abs_shap", ascending=False).head(top_n).reset_index(drop=True)


def feature_direction(
    shap_vals:     shap.Explanation,
    X:             np.ndarray | pd.DataFrame,
    feature_names: list[str] | None = None,
) -> pd.DataFrame:
    """
    Compute correlation between feature value and SHAP value.
    Positive = feature increases prediction; negative = decreases.
    """
    vals  = shap_vals.values if hasattr(shap_vals, "values") else shap_vals
    if vals.ndim == 3:
        vals = vals[:, :, 1]
    X_arr = X.values if isinstance(X, pd.DataFrame) else X
    names = feature_names or [f"f{i}" for i in range(vals.shape[1])]

    rows = []
    for i, name in enumerate(names):
        corr = float(np.corrcoef(X_arr[:, i], vals[:, i])[0, 1])
        rows.append({"feature": name, "corr": round(corr, 4),
                     "direction": "positive" if corr > 0 else "negative"})
    return pd.DataFrame(rows).sort_values("corr", key=abs, ascending=False)


# ── 4. Individual prediction explanation ─────────────────────────────────────

def explain_prediction(
    shap_vals:     shap.Explanation,
    idx:           int = 0,
    X:             pd.DataFrame | None = None,
    top_n:         int = 10,
) -> dict:
    """
    Explain a single prediction.
    Returns base_value, prediction, and top feature contributions.
    """
    sv = shap_vals[idx]
    if sv.values.ndim == 2:    # Multi-output (e.g. binary classifier): take positive class
        sv = shap_vals[idx, :, 1]

    base   = float(sv.base_values) if hasattr(sv, "base_values") else 0.0
    values = sv.values if hasattr(sv, "values") else sv
    names  = list(sv.feature_names) if getattr(sv, "feature_names", None) is not None else None

    if names is None:
        names = list(X.columns) if X is not None else [f"f{i}" for i in range(len(values))]

    contribs   = sorted(zip(names, values), key=lambda x: abs(x[1]), reverse=True)[:top_n]
    prediction = base + float(np.sum(values))

    return {
        "base_value":    round(base, 4),
        "prediction":    round(prediction, 4),
        "top_features":  [(n, round(float(v), 4)) for n, v in contribs],
    }


# ── 5. Plotting ───────────────────────────────────────────────────────────────

def save_summary_plot(
    shap_vals:     shap.Explanation,
    X:             pd.DataFrame,
    output_path:   str = "shap_summary.png",
    plot_type:     str = "dot",   # "dot" (beeswarm-style) | "bar" | "violin"
    max_display:   int = 20,
) -> str:
    """Save SHAP summary (beeswarm-style dot) plot as PNG."""
    plt.figure(figsize=(10, max_display * 0.4 + 2))
    shap.summary_plot(
        shap_vals, X,
        plot_type=plot_type,
        max_display=max_display,
        show=False,
    )
    plt.tight_layout()
    plt.savefig(output_path, dpi=120, bbox_inches="tight")
    plt.close()
    print(f"Summary plot saved: {output_path}")
    return output_path


def save_waterfall_plot(
    shap_vals:   shap.Explanation,
    idx:         int = 0,
    output_path: str = "shap_waterfall.png",
    max_display: int = 15,
) -> str:
    """Save waterfall plot for a single prediction."""
    sv = shap_vals[idx]
    if sv.values.ndim == 2:
        sv = shap_vals[idx, :, 1]
    plt.figure(figsize=(10, max_display * 0.4 + 2))
    shap.plots.waterfall(sv, max_display=max_display, show=False)
    plt.tight_layout()
    plt.savefig(output_path, dpi=120, bbox_inches="tight")
    plt.close()
    print(f"Waterfall plot saved: {output_path}")
    return output_path


def save_bar_plot(
    shap_vals:   shap.Explanation,
    output_path: str = "shap_bar.png",
    max_display: int = 20,
) -> str:
    """Save global importance bar chart."""
    plt.figure(figsize=(8, max_display * 0.4 + 2))
    shap.plots.bar(shap_vals, max_display=max_display, show=False)
    plt.tight_layout()
    plt.savefig(output_path, dpi=120, bbox_inches="tight")
    plt.close()
    return output_path


def save_dependence_plot(
    shap_vals:     shap.Explanation,
    feature:       str,
    X:             pd.DataFrame,
    interaction:   str = "auto",
    output_path:   str | None = None,
) -> str:
    """
    Save dependence plot showing SHAP value vs feature value.
    interaction="auto" picks the feature with strongest interaction.
    """
    output_path = output_path or f"shap_dep_{feature}.png"
    plt.figure(figsize=(8, 5))
    vals = shap_vals.values if shap_vals.values.ndim == 2 else shap_vals.values[:, :, 1]
    shap.dependence_plot(feature, vals, X,
                         interaction_index=interaction, show=False)
    plt.tight_layout()
    plt.savefig(output_path, dpi=120, bbox_inches="tight")
    plt.close()
    return output_path


# ── 6. Model debugging / fairness ─────────────────────────────────────────────

def shap_by_subgroup(
    shap_vals:      shap.Explanation,
    group_labels:   np.ndarray,
    feature_names:  list[str] | None = None,
) -> pd.DataFrame:
    """
    Compare mean |SHAP| importance across subgroups.
    Useful for detecting fairness issues (different feature usage per group).
    """
    vals   = shap_vals.values
    if vals.ndim == 3:
        vals = vals[:, :, 1]
    names  = feature_names or [f"f{i}" for i in range(vals.shape[1])]
    groups = np.unique(group_labels)

    rows = {}
    for g in groups:
        mask   = group_labels == g
        rows[f"group_{g}"] = np.abs(vals[mask]).mean(axis=0)

    return pd.DataFrame(rows, index=names).sort_values(f"group_{groups[0]}", ascending=False)


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    print("SHAP Explainability Demo")
    print("="*50)

    X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=42)
    feature_names = [f"feature_{i}" for i in range(10)]
    X_df = pd.DataFrame(X, columns=feature_names)

    X_tr, X_te, y_tr, y_te = train_test_split(X_df, y, test_size=0.2, random_state=42)

    # Train model
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_tr, y_tr)
    print(f"Model accuracy: {model.score(X_te, y_te):.3f}")

    # SHAP
    explainer  = create_tree_explainer(model)
    shap_vals  = compute_shap_values(explainer, X_te)
    shap_vals  = shap_values_binary(shap_vals)

    # Global importance
    imp = global_importance(shap_vals, feature_names=feature_names, top_n=5)
    print(f"\nGlobal feature importance (top 5):\n{imp}")

    # Individual explanation
    explanation = explain_prediction(shap_vals, idx=0, X=X_te)
    print(f"\nPrediction explanation (sample 0):")
    print(f"  base_value = {explanation['base_value']}")
    print(f"  prediction = {explanation['prediction']}")
    print(f"  top features: {explanation['top_features'][:3]}")

    # Feature direction
    dirs = feature_direction(shap_vals, X_te, feature_names)
    print(f"\nFeature directions (top 5):\n{dirs.head()}")

    # Save plots
    save_summary_plot(shap_vals, X_te, "/tmp/shap_summary.png")
    save_waterfall_plot(shap_vals, idx=0, output_path="/tmp/shap_waterfall.png")
    save_bar_plot(shap_vals, output_path="/tmp/shap_bar.png")
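
    # Illustrative extras (sketch): dependence plot and a subgroup fairness check.
    # The group labels below are synthetic stand-ins; a real audit would use a
    # protected attribute (age bracket, region, ...) instead.
    save_dependence_plot(shap_vals, "feature_0", X_te,
                         output_path="/tmp/shap_dep_feature_0.png")
    groups = (X_te["feature_1"] > X_te["feature_1"].median()).astype(int).values
    by_group = shap_by_subgroup(shap_vals, groups, feature_names=feature_names)
    print(f"\nMean |SHAP| by subgroup:\n{by_group.head()}")

    # Interaction values (sketch): decompose attributions into main effects and
    # pairwise interactions. The guards hedge across shap versions, which differ
    # in how per-class interaction arrays are returned.
    inter = explainer.shap_interaction_values(X_te)
    if isinstance(inter, list):       # some versions: one array per class
        inter = inter[1]
    if np.ndim(inter) == 4:           # some versions: class axis last
        inter = inter[..., 1]
    mean_abs = np.abs(inter).mean(axis=0)   # (F, F); diagonal holds main effects
    np.fill_diagonal(mean_abs, 0)           # keep only pairwise interactions
    i, j = np.unravel_index(mean_abs.argmax(), mean_abs.shape)
    print(f"Strongest pairwise interaction: feature_{i} x feature_{j}")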

Versus LIME: LIME explains single predictions with locally faithful linear approximations and produces simple "this is why" summaries that are easy to communicate to non-technical stakeholders. SHAP's Shapley values, by contrast, form the unique attribution scheme satisfying the four Shapley axioms (efficiency, symmetry, dummy, linearity): attributions sum exactly to the gap between the prediction and the baseline, and shared contributions are credited fairly. That makes SHAP the standard for regulatory compliance (GDPR right-to-explanation, model audit trails), where attribution accuracy matters more than communication simplicity.

Versus scikit-learn's feature_importances_: the built-in impurity-based importance is computed during training and is biased toward high-cardinality and correlated features. SHAP's TreeExplainer produces consistent, interaction-aware importances at roughly the cost of a predict call, and dependence_plot reveals non-linear feature effects that a single importance score hides.

The Claude Skills 360 bundle includes SHAP skill sets covering TreeExplainer for gradient boosting, LinearExplainer, DeepExplainer for neural networks, KernelExplainer for any model, global importance, waterfall and force plots, beeswarm summaries, dependence plots, interaction values, and subgroup fairness analysis. Start with the free tier to try model explainability code generation.

Keep Reading

AI

Claude Code for email.contentmanager: Python Email Content Accessors

Read and write EmailMessage body content with Python's email.contentmanager module and Claude Code: the ContentManager class that maps content types to get/set handlers, the raw_data_manager and content_manager instances, get_content and set_content for text and binary payloads, the make_alternative, make_mixed, and make_related multipart conversions, add_attachment, and integration with email.message, email.policy, email.mime, and io.

5 min read Feb 12, 2029
AI

Claude Code for email.charset: Python Email Charset Encoding

Control header and body encoding for international email with Python's email.charset module and Claude Code: the Charset class with its header_encoding, body_encoding, input_codec, and output_codec attributes, the header_encode, body_encode, and convert methods, the add_charset, add_alias, and add_codec registry functions, and integration with email.message, email.mime, email.policy, and email.encoders.

5 min read Feb 11, 2029
AI

Claude Code for email.utils: Python Email Address and Header Utilities

Parse and format RFC 2822 email addresses and dates with Python's email.utils module and Claude Code: parseaddr, formataddr, and getaddresses for address handling, parsedate, parsedate_tz, parsedate_to_datetime, formatdate, and format_datetime for dates, make_msgid for Message-ID generation, and the RFC 2231 parameter helpers decode_rfc2231, encode_rfc2231, and collapse_rfc2231_value.

5 min read Feb 10, 2029

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
