Claude Code for Chainlit: LLM Chat Applications — Claude Skills 360 Blog

Claude Code for Chainlit: LLM Chat Applications

Published: October 3, 2027
Read time: 5 min read
By: Claude Skills 360

Chainlit builds production LLM chat interfaces in pure Python. Install with pip install chainlit and launch with chainlit run app.py. The core API surface, all reached through import chainlit as cl:

- Messages: decorate an async handler with @cl.on_message, then reply with await cl.Message(content=f"Echo: {message.content}").send()
- Session init: in @cl.on_chat_start, call cl.user_session.set("history", []); read it back with history = cl.user_session.get("history")
- Streaming: create msg = cl.Message(content=""), then async for chunk in llm.astream(prompt): await msg.stream_token(chunk), and finish with await msg.update()
- Elements: await cl.Message(content="Here's the chart", elements=[cl.Image(path="chart.png", name="chart", display="inline")]).send()
- Steps for agent reasoning: async with cl.Step(name="Search") as step: step.input = query; results = await search(query); step.output = results
- Actions: actions = [cl.Action(name="thumbs_up", label="👍", value="positive")], sent via await cl.Message(content="How was this?", actions=actions).send(); handle with @cl.action_callback("thumbs_up") on async def on_thumbs_up(action): await cl.Message(content="Thanks!").send()
- File upload: files = await cl.AskFileMessage(content="Upload PDF", accept=["application/pdf"]).send()
- LangChain: cb = cl.AsyncLangchainCallbackHandler(), then chain.invoke({"question": prompt}, config={"callbacks": [cb]})
- Auth: @cl.password_auth_callback on def auth_callback(username, password): return cl.User(identifier=username) if valid else None
- Config: .chainlit/config.toml controls [UI] name = "My Bot" and [features] flags such as prompt_playground = true

Claude Code generates Chainlit apps for RAG chatbots, multi-step agents, authenticated enterprise bots, and LangChain pipeline UIs.

CLAUDE.md for Chainlit

## Chainlit Stack
- Version: chainlit >= 1.2
- Run: chainlit run app.py -w (auto-reloads on save; serves on port 8000)
- Messages: @cl.on_message async fn(message: cl.Message) → await cl.Message(content).send()
- Session: cl.user_session.set/get — per-user, per-session storage
- Streaming: cl.Message(""); await msg.stream_token(chunk); await msg.update()
- Steps: async with cl.Step(name) as step: step.input=x; ...; step.output=y
- Actions: cl.Action(name,label,value) in message.actions; @cl.action_callback(name)
- Auth: @cl.password_auth_callback or @cl.oauth_callback returns cl.User or None
- Config: .chainlit/config.toml — UI name, features, theme

Chainlit Chat Application

# app/chainlit_app.py — production LLM chat app with RAG and agent steps
from __future__ import annotations
import asyncio
import os
from pathlib import Path

import chainlit as cl
from chainlit.types import ThreadDict


# ── Configuration ─────────────────────────────────────────────────────────────

SYSTEM_PROMPT = os.environ.get(
    "SYSTEM_PROMPT",
    "You are a knowledgeable AI assistant. Be concise and helpful.",
)
MODEL = os.environ.get("LLM_MODEL", "claude-sonnet-4-6")


# ── Auth (optional — remove block to allow anonymous access) ─────────────────

@cl.password_auth_callback
def auth_callback(username: str, password: str) -> cl.User | None:
    """Simple password auth — replace with real user store."""
    credentials = {
        "admin":  ("adminpass", {"role": "admin"}),
        "user1":  ("pass123",   {"role": "user"}),
    }
    if username in credentials:
        expected_pass, metadata = credentials[username]
        if password == expected_pass:
            return cl.User(identifier=username, metadata=metadata)
    return None


# ── Session lifecycle ─────────────────────────────────────────────────────────

@cl.on_chat_start
async def on_chat_start() -> None:
    """Initialize session state and greet user."""
    user = cl.user_session.get("user")
    name = user.identifier if user else "there"

    # Initialize message history in session
    cl.user_session.set("history", [
        {"role": "system", "content": SYSTEM_PROMPT}
    ])
    cl.user_session.set("uploaded_docs", [])
    cl.user_session.set("token_count", 0)

    # Set a persona avatar
    cl.user_session.set("chat_profile", "assistant")

    await cl.Message(
        content=f"Hello, **{name}**! I'm your AI assistant. You can:\n"
                "- Ask me anything\n"
                "- Upload files for analysis (PDF, text, CSV)\n"
                "- Use the buttons below for quick actions",
        actions=[
            cl.Action(name="clear_history", label="🗑️ Clear History", value="clear"),
            cl.Action(name="show_stats",   label="📊 Show Stats",   value="stats"),
        ],
    ).send()


@cl.on_chat_end
async def on_chat_end() -> None:
    """Cleanup when chat session ends."""
    pass


@cl.on_chat_resume
async def on_chat_resume(thread: ThreadDict) -> None:
    """Restore history when resuming a stored thread."""
    history = []
    for step in thread.get("steps", []):
        if step["type"] == "user_message":
            history.append({"role": "user",      "content": step["output"]})
        elif step["type"] == "assistant_message":
            history.append({"role": "assistant", "content": step["output"]})
    cl.user_session.set("history", [{"role": "system", "content": SYSTEM_PROMPT}] + history)


# ── Action callbacks ──────────────────────────────────────────────────────────

@cl.action_callback("clear_history")
async def on_clear_history(action: cl.Action) -> None:
    cl.user_session.set("history", [{"role": "system", "content": SYSTEM_PROMPT}])
    await cl.Message(content="History cleared. Fresh start!").send()
    await action.remove()


@cl.action_callback("show_stats")
async def on_show_stats(action: cl.Action) -> None:
    history      = cl.user_session.get("history", [])
    token_count  = cl.user_session.get("token_count", 0)
    docs         = cl.user_session.get("uploaded_docs", [])

    stats_md = (
        f"**Session Statistics**\n"
        f"- Messages: {len(history) - 1}\n"
        f"- Est. tokens used: {token_count:,}\n"
        f"- Documents uploaded: {len(docs)}\n"
    )
    await cl.Message(content=stats_md).send()


# ── Main message handler ──────────────────────────────────────────────────────

@cl.on_message
async def on_message(message: cl.Message) -> None:
    """Handle incoming message with optional file attachments."""
    history = cl.user_session.get("history", [])

    # Handle file attachments
    attachment_context = ""
    if message.elements:
        async with cl.Step(name="Processing attachments") as step:
            step.input = f"{len(message.elements)} file(s)"
            texts = []
            for elem in message.elements:
                if hasattr(elem, "path") and elem.path:
                    text = await extract_file_content(elem.path, elem.name or "")
                    if text:
                        texts.append(f"[{elem.name}]:\n{text[:2000]}")
            attachment_context = "\n\n".join(texts)
            step.output = f"Extracted {len(texts)} document(s)"

    # Build user message with optional attachment context
    user_content = message.content
    if attachment_context:
        user_content = f"{user_content}\n\nAttached content:\n{attachment_context}"

    history.append({"role": "user", "content": user_content})

    # Multi-step agent: planning → retrieval → answer
    if _is_complex_query(message.content):
        await handle_with_steps(message.content, history)
    else:
        await handle_simple(history)

    cl.user_session.set("history", history)


async def handle_simple(history: list[dict]) -> None:
    """Direct LLM call with streaming response."""
    response_text = ""
    msg = cl.Message(content="")
    await msg.send()

    # Simulate streaming (replace with real LLM call)
    words = "Here is a helpful response to your question about the topic you asked.".split()
    for word in words:
        await msg.stream_token(word + " ")
        await asyncio.sleep(0.03)
        response_text += word + " "

    await msg.update()
    history.append({"role": "assistant", "content": response_text.strip()})

    # Update token estimate
    token_count = cl.user_session.get("token_count", 0)
    cl.user_session.set("token_count", token_count + len(response_text.split()) * 2)


async def handle_with_steps(query: str, history: list[dict]) -> None:
    """Multi-step reasoning with visible steps."""
    # Step 1: Planning
    async with cl.Step(name="Planning", type="tool") as step:
        step.input  = query
        step.output = "Breaking down into: 1) Retrieve context, 2) Synthesize answer"

    # Step 2: Retrieval (simulated)
    async with cl.Step(name="Retrieval", type="retrieval") as step:
        step.input  = query
        await asyncio.sleep(0.3)
        step.output = "Found 3 relevant passages"

    # Step 3: Draft the final answer (streamed to the user below)
    async with cl.Step(name="Generating answer") as step:
        step.input = query
        response   = "Based on the retrieved context, here is a comprehensive answer..."
        step.output = response

    # Show final message outside steps
    msg = cl.Message(content="")
    await msg.send()
    for word in response.split():
        await msg.stream_token(word + " ")
        await asyncio.sleep(0.02)
    await msg.update()

    history.append({"role": "assistant", "content": response})

    # Offer feedback actions
    await cl.Message(
        content="Was this helpful?",
        actions=[
            cl.Action(name="feedback_good", label="👍 Yes",        value="good"),
            cl.Action(name="feedback_bad",  label="👎 Not quite",  value="bad"),
        ],
    ).send()


@cl.action_callback("feedback_good")
async def feedback_good(action: cl.Action):
    await cl.Message(content="Thanks for the positive feedback!").send()
    await action.remove()


@cl.action_callback("feedback_bad")
async def feedback_bad(action: cl.Action):
    await cl.Message(content="Thanks for the feedback. I'll try to improve!").send()
    await action.remove()


# ── Helpers ───────────────────────────────────────────────────────────────────

async def extract_file_content(path: str, name: str) -> str:
    """Extract text from uploaded file."""
    p = Path(path)
    suffix = p.suffix.lower()
    try:
        if suffix in (".txt", ".md"):
            return p.read_text(encoding="utf-8", errors="ignore")
        elif suffix == ".csv":
            import pandas as pd
            df = pd.read_csv(p)
            return f"CSV with {len(df)} rows, columns: {list(df.columns)}\n{df.head(5).to_string()}"
        elif suffix == ".pdf":
            # Requires pypdf
            from pypdf import PdfReader
            reader = PdfReader(str(p))
            return "\n".join(page.extract_text() or "" for page in reader.pages[:5])
    except Exception as e:
        return f"[Could not extract content from {name}: {e}]"
    return ""


def _is_complex_query(text: str) -> bool:
    """Detect queries that benefit from multi-step reasoning."""
    keywords = ["explain", "compare", "analyze", "how does", "why does", "what is the difference"]
    return any(kw in text.lower() for kw in keywords)

Chainlit config at .chainlit/config.toml:

[project]
enable_telemetry = false

[features]
prompt_playground = true
multi_modal = true
spontaneous_file_upload = true

[UI]
name = "AI Assistant"
description = "Powered by Claude Code"
default_collapse_content = true
hide_cot = false

[meta]
generated_by = "1.2.0"
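One deployment detail the auth callback above depends on: Chainlit signs session tokens with the CHAINLIT_AUTH_SECRET environment variable and will not start an auth-enabled app without it. A sketch of the launch sequence (the app path matches the example above):

```shell
# Generate a signing secret for auth-enabled apps (prints a random value)
chainlit create-secret
export CHAINLIT_AUTH_SECRET="<paste the generated value>"

# Run with -w to auto-reload on save; serves on http://localhost:8000
chainlit run app/chainlit_app.py -w
```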

Consider Streamlit instead when you need richer data visualization: Plotly charts, dataframe manipulation, and multi-page analytics dashboards are where Streamlit excels as a data-app framework. Chainlit counters with cl.Step for transparent agent reasoning traces, cl.Action for interactive feedback buttons, cl.on_chat_resume for persistent conversation threads, and native LangChain/LlamaIndex callback integration, which make it the better choice for production LLM chatbots where observability into the reasoning process matters. Consider Gradio's ChatInterface when you prioritize HuggingFace Spaces deployment and a simpler API with fewer decorators; Chainlit's session persistence, auth callbacks, file upload handling, and thread storage make it more suitable for enterprise deployments where users expect stateful conversations and access control. The Claude Skills 360 bundle includes Chainlit skill sets covering streaming responses, multi-step agent traces, Actions, file uploads, authentication, LangChain integration, and config.toml branding. Start with the free tier to try LLM chat app generation.
