Claude Code for Streamlit: Data Apps and ML Dashboards — Claude Skills 360 Blog


Published: October 2, 2027
Read time: 5 min read
By: Claude Skills 360

Streamlit turns Python scripts into interactive data apps: pip install streamlit, write a script, then streamlit run app.py. The core API, via import streamlit as st:

- Text: st.title("My App"), st.write("Hello!"), st.markdown("**Bold**")
- Data: st.dataframe(df) renders an interactive table; st.metric("MRR", "$12k", "+8%") shows a KPI with its delta
- Charts: st.line_chart(df), st.plotly_chart(fig), st.altair_chart(chart)
- Inputs: x = st.slider("Value", 0, 100, 50), name = st.text_input("Name"), option = st.selectbox("Model", ["llama3", "gpt4"]), file = st.file_uploader("Upload CSV")
- Sidebar: with st.sidebar: st.selectbox(...)
- Session state: st.session_state persists across reruns — initialize with if "messages" not in st.session_state: st.session_state.messages = []
- Caching: @st.cache_data memoizes functions returning data (DataFrames, lists); @st.cache_resource caches singleton objects (models, DB connections)
- Chat: for msg in st.session_state.messages: st.chat_message(msg["role"]).write(msg["content"]), plus if prompt := st.chat_input(): ...
- Streaming: with st.chat_message("assistant"): st.write_stream(llm.stream(prompt))
- Layout: col1, col2 = st.columns(2) with col1: st.image(img); tab1, tab2 = st.tabs(["Train", "Eval"])
- Forms: with st.form("settings"): lr = st.number_input("LR"); submitted = st.form_submit_button("Run")
- Multi-page: the pages/1_Overview.py, pages/2_Predictions.py convention, or st.navigation([st.Page("app.py", "Home"), st.Page("pages/predict.py", "Predict")])
- Deploy: push to GitHub, then Streamlit Community Cloud at share.streamlit.io

Claude Code generates Streamlit dashboards for ML monitoring, model evaluation, data exploration, and LLM chatbots.

CLAUDE.md for Streamlit

## Streamlit Stack
- Version: streamlit >= 1.37
- Reruns on every widget interaction — design for idempotency
- State: st.session_state["key"] persists across reruns; init with if "key" not in st.session_state
- Cache: @st.cache_data (DataFrames, pure fns) / @st.cache_resource (models, connections)
- Chat: st.chat_input() + st.chat_message(role) + st.write_stream(generator)
- Layout: st.columns(N), st.tabs([...]), st.sidebar, st.expander
- Multi-page: pages/ directory or st.navigation([st.Page(...)])
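The two cache decorators differ in what callers receive: st.cache_data hands each caller a fresh copy of the cached value (so mutating the result cannot corrupt the cache), while st.cache_resource hands every caller the same object. A rough stand-in using functools makes the distinction concrete — cache_data_like and cache_resource_like are illustrative names, not Streamlit API:

```python
import copy
import functools

def cache_resource_like(fn):
    """Rough analogue of @st.cache_resource: one shared object for all callers."""
    return functools.lru_cache(maxsize=None)(fn)

def cache_data_like(fn):
    """Rough analogue of @st.cache_data: compute once, hand out copies."""
    cached = functools.lru_cache(maxsize=None)(fn)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return copy.deepcopy(cached(*args, **kwargs))
    return wrapper

@cache_resource_like
def get_connection() -> dict:
    return {"conn": "singleton"}       # e.g. a model or DB handle

@cache_data_like
def load_rows() -> list[int]:
    return [1, 2, 3]                   # e.g. a DataFrame

assert get_connection() is get_connection()   # same object every call
rows = load_rows()
rows.append(99)                               # caller mutates its copy...
assert load_rows() == [1, 2, 3]               # ...cache is unaffected
```

Real Streamlit additionally hashes the function's inputs and source code, invalidates on code changes, and supports ttl= and max_entries=; this sketch captures only the copy-vs-singleton distinction.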

Streamlit ML Dashboard

# app/streamlit_app.py — ML model monitoring and prediction dashboard
from __future__ import annotations
import io
import time
from typing import Generator

import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import streamlit as st


# ── Page config ───────────────────────────────────────────────────────────────

st.set_page_config(
    page_title="Claude Code ML Dashboard",
    page_icon="🤖",
    layout="wide",
    initial_sidebar_state="expanded",
)


# ── Cached resources ──────────────────────────────────────────────────────────

@st.cache_resource
def load_model():
    """Load ML model once and cache for all sessions."""
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
    model = GradientBoostingClassifier(n_estimators=100, random_state=42)
    model.fit(X[:1600], y[:1600])
    return model, X[1600:], y[1600:]


@st.cache_data
def load_metrics_history() -> pd.DataFrame:
    """Simulated model metrics over time — replace with real DB/file loads."""
    dates = pd.date_range("2026-01-01", periods=30, freq="D")
    rng   = np.random.default_rng(42)
    return pd.DataFrame({
        "date":          dates,
        "train_auc":     0.88 + rng.normal(0, 0.01, 30).cumsum() * 0.01,
        "val_auc":       0.85 + rng.normal(0, 0.015, 30).cumsum() * 0.01,
        "data_drift":    rng.uniform(0, 0.3, 30),
        "prediction_vol": rng.integers(800, 1200, 30),
    })


# ── Helper: streaming LLM response ───────────────────────────────────────────

def stream_llm_response(prompt: str) -> Generator[str, None, None]:
    """Simulated streaming LLM — replace with real model."""
    response = f"Analysis of '{prompt[:30]}...': Based on the data, I recommend focusing on features with highest SHAP values. The model shows stable performance with no significant drift detected."
    for word in response.split():
        time.sleep(0.04)
        yield word + " "


# ── Sidebar ────────────────────────────────────────────────────────────────────

with st.sidebar:
    st.title("ML Dashboard")
    st.markdown("---")

    page = st.radio(
        "Navigation",
        ["Overview", "Predictions", "Data Explorer", "Chatbot"],
        label_visibility="collapsed",
    )

    st.markdown("---")
    st.markdown("**Model:** GradientBoosting v2.3")
    st.markdown("**Status:** 🟢 Healthy")

    threshold = st.slider("Decision Threshold", 0.1, 0.9, 0.5, 0.05)
    show_proba = st.checkbox("Show probabilities", value=True)


# ── Pages ─────────────────────────────────────────────────────────────────────

if page == "Overview":
    st.title("Model Overview")

    # KPI metrics row
    col1, col2, col3, col4 = st.columns(4)
    col1.metric("Val AUC",    "0.874", "+0.003")
    col2.metric("Daily Volume", "1,024", "+12%")
    col3.metric("Data Drift", "0.12", "-0.03", delta_color="inverse")
    col4.metric("Latency P99", "42ms", "-5ms")

    # Performance chart
    df_metrics = load_metrics_history()

    fig = go.Figure()
    fig.add_trace(go.Scatter(x=df_metrics["date"], y=df_metrics["train_auc"],
                             name="Train AUC", line=dict(color="royalblue")))
    fig.add_trace(go.Scatter(x=df_metrics["date"], y=df_metrics["val_auc"],
                             name="Val AUC",   line=dict(color="tomato")))
    fig.add_hline(y=0.80, line_dash="dash", line_color="gray", annotation_text="SLA")
    fig.update_layout(title="AUC Over Time", height=350, margin=dict(t=40, b=20))
    st.plotly_chart(fig, use_container_width=True)

    # Data drift
    with st.expander("Data Drift Details"):
        fig2 = px.bar(df_metrics, x="date", y="data_drift",
                      color="data_drift", color_continuous_scale="RdYlGn_r")
        fig2.add_hline(y=0.2, line_dash="dash", annotation_text="Alert threshold")
        st.plotly_chart(fig2, use_container_width=True)


elif page == "Predictions":
    st.title("Run Predictions")

    model, X_test, y_test = load_model()

    tab1, tab2 = st.tabs(["Single Prediction", "Batch Upload"])

    with tab1:
        st.markdown("Enter feature values for a single prediction.")
        with st.form("prediction_form"):
            cols = st.columns(5)
            features = []
            for i in range(10):
                with cols[i % 5]:
                    val = st.number_input(f"Feature {i+1}", value=0.0, format="%.3f")
                    features.append(val)
            submitted = st.form_submit_button("Predict", type="primary")

        if submitted:
            x = np.array(features).reshape(1, -1)
            proba = model.predict_proba(x)[0, 1]
            pred  = int(proba >= threshold)

            col_a, col_b = st.columns(2)
            col_a.metric("Prediction", "Positive ✅" if pred else "Negative ❌")
            if show_proba:
                col_b.metric("Probability", f"{proba:.3f}")

            # Gauge chart
            fig = go.Figure(go.Indicator(
                mode="gauge+number",
                value=proba,
                domain={"x": [0, 1], "y": [0, 1]},
                gauge={
                    "axis": {"range": [0, 1]},
                    "bar": {"color": "tomato" if proba >= threshold else "steelblue"},
                    "steps": [{"range": [0, threshold], "color": "lightblue"},
                              {"range": [threshold, 1], "color": "lightsalmon"}],
                    "threshold": {"value": threshold, "line": {"color": "black", "width": 3}},
                },
            ))
            fig.update_layout(height=200, margin=dict(t=20, b=10))
            st.plotly_chart(fig, use_container_width=True)

    with tab2:
        uploaded = st.file_uploader("Upload CSV (no header, 10 features)", type="csv")
        if uploaded:
            df_upload = pd.read_csv(uploaded, header=None)
            st.write(f"Loaded {len(df_upload)} rows")

            with st.spinner("Running predictions..."):
                time.sleep(0.5)
                probas  = model.predict_proba(df_upload.values)[:, 1]
                preds   = (probas >= threshold).astype(int)
                df_out  = df_upload.copy()
                df_out["probability"] = probas.round(4)
                df_out["prediction"]  = preds

            st.dataframe(df_out, use_container_width=True)

            csv = io.StringIO()
            df_out.to_csv(csv, index=False)
            st.download_button("Download Predictions", csv.getvalue(),
                               "predictions.csv", "text/csv")


elif page == "Data Explorer":
    st.title("Data Explorer")

    model, X_test, y_test = load_model()
    df_test = pd.DataFrame(X_test, columns=[f"f{i}" for i in range(10)])
    df_test["label"] = y_test
    df_test["score"] = model.predict_proba(X_test)[:, 1]

    col1, col2 = st.columns([1, 3])
    with col1:
        feat_x = st.selectbox("X axis", df_test.columns[:-2], index=0)
        feat_y = st.selectbox("Y axis", df_test.columns[:-2], index=1)
        color  = st.selectbox("Color",  ["label", "score"])

    with col2:
        fig = px.scatter(df_test, x=feat_x, y=feat_y, color=color,
                         opacity=0.6, height=400)
        st.plotly_chart(fig, use_container_width=True)

    with st.expander("Raw Data"):
        st.dataframe(df_test.head(100), use_container_width=True)


elif page == "Chatbot":
    st.title("Model Analysis Chatbot")
    st.caption("Ask questions about model performance or get recommendations.")

    # Initialize message history in session state
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display full chat history
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # Chat input
    if prompt := st.chat_input("Ask about the model..."):
        # Add user message
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.write(prompt)

        # Stream assistant response
        with st.chat_message("assistant"):
            response = st.write_stream(stream_llm_response(prompt))

        st.session_state.messages.append({"role": "assistant", "content": response})
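The batch-upload tab's scoring and CSV round trip are plain pandas, so they can be unit-tested headless, without a running Streamlit server. A sketch — score_batch is an illustrative helper, and the sigmoid-of-mean scorer is a stub standing in for model.predict_proba:

```python
import io
import numpy as np
import pandas as pd

def score_batch(csv_text: str, threshold: float = 0.5) -> pd.DataFrame:
    """Read a headerless feature CSV, append probability + prediction columns."""
    df = pd.read_csv(io.StringIO(csv_text), header=None)
    # Stub scorer: mean of features squashed into (0, 1). In the app this
    # line is model.predict_proba(df.values)[:, 1].
    probas = 1.0 / (1.0 + np.exp(-df.values.mean(axis=1)))
    df_out = df.copy()
    df_out["probability"] = probas.round(4)
    df_out["prediction"] = (probas >= threshold).astype(int)
    return df_out

csv_text = "1.0,2.0\n-3.0,-4.0\n"
df_out = score_batch(csv_text)
assert list(df_out["prediction"]) == [1, 0]   # positive mean vs negative mean
downloadable = df_out.to_csv(index=False)     # what st.download_button serves
```

Keeping this logic in a pure function means the Streamlit page reduces to widget wiring around a tested core.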

Choose the Gradio alternative when building ML demos specifically for HuggingFace Spaces, with image/audio/video I/O and a purpose-built ChatInterface that needs less boilerplate than Streamlit's chat primitives. Gradio's component model is designed for single-turn input-output demos, while Streamlit's st.session_state persistence and multi-page routing make it better for multi-step workflows: model comparison dashboards, data exploration, and analytics applications that users spend time navigating. Choose the Dash/Plotly alternative when building production BI dashboards with multi-page apps, REST API backends, and enterprise authentication in an organization that already uses Plotly; Dash is the more production-grade choice for pure dashboards, while Streamlit's simpler mental model and faster iteration cycle make it the default for data scientists building ML demos and self-service analytics tools. The Claude Skills 360 bundle includes Streamlit skill sets covering ML dashboards, cache patterns, chatbot UIs with streaming, multi-page apps, form inputs, and Plotly chart integration. Start with the free tier to try data app generation.


Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
