
Claude Code for LMDeploy: Efficient LLM Deployment

Published: October 14, 2027
Read time: 5 min read
By: Claude Skills 360

LMDeploy is a toolkit for compressing and serving large language models; its TurboMind engine delivers high-throughput inference. Install it with pip install lmdeploy and chat from the CLI with lmdeploy chat meta-llama/Llama-3.1-8B-Instruct. To convert and quantize a model to W4A16 AWQ, run lmdeploy lite auto_awq meta-llama/Llama-3.1-8B-Instruct --work-dir ./llama3-awq-4bit. Serve an OpenAI-compatible API with lmdeploy serve api_server ./llama3-awq-4bit --server-port 23333 --tp 2, which listens at http://localhost:23333.

In Python, build a pipeline with from lmdeploy import pipeline, TurbomindEngineConfig and pipe = pipeline("meta-llama/Llama-3.1-8B-Instruct", backend_config=TurbomindEngineConfig(max_batch_size=64, cache_max_entry_count=0.8)). A single call is response = pipe("What is LMDeploy?"); a batch is responses = pipe(["Question 1", "Question 2"]); chat-style input is pipe([{"role": "user", "content": "Hello"}]). Control sampling with from lmdeploy import GenerationConfig, then gen_config = GenerationConfig(max_new_tokens=512, temperature=0.7, top_p=0.9, repetition_penalty=1.05) and response = pipe(prompt, gen_config=gen_config). For multi-GPU inference, TurbomindEngineConfig(tp=4) shards the model across four GPUs with tensor parallelism. Additional quantization paths: INT8 KV cache with lmdeploy lite kv_qparams ./model --work-dir ./model-kv-int8, and W8A8 with lmdeploy lite smooth_quant ./model --work-dir ./model-w8a8.

Vision-language models use the same pipeline API: from lmdeploy.vl import load_image, pipe = pipeline("OpenGVLab/InternVL2-8B"), image = load_image("image.jpg"), response = pipe(("Describe this image", image)). A running server can be queried with the bundled client: from lmdeploy.serve.openai.api_client import APIClient, client = APIClient("http://localhost:23333"), model_name = client.available_models[0], then iterate for chunk in client.chat_completions_v1(...). Claude Code generates LMDeploy pipelines, quantization scripts, API server configs, and vision model inference code.
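As a minimal client-side sketch, the snippet below queries the api_server started above with LMDeploy's bundled APIClient. It assumes the server is running on localhost:23333 and that streamed chunks follow the OpenAI chat-completions schema the server emulates; treat it as a starting point rather than a drop-in script.

# client_demo.py — query a running `lmdeploy serve api_server` (assumes http://localhost:23333)
from lmdeploy.serve.openai.api_client import APIClient

client = APIClient("http://localhost:23333")
model_name = client.available_models[0]     # first model registered on the server

# Stream a chat completion and print content deltas as they arrive
for chunk in client.chat_completions_v1(
    model=model_name,
    messages=[{"role": "user", "content": "What is LMDeploy?"}],
    stream=True,
):
    delta = chunk["choices"][0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)
print()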

CLAUDE.md for LMDeploy

## LMDeploy Stack
- Version: lmdeploy >= 0.6
- Engine: TurbomindEngineConfig(max_batch_size, cache_max_entry_count, tp)
- Pipeline: pipeline(model_path, backend_config=TurbomindEngineConfig(...))
- Call: pipe(prompt) | pipe([prompt1, prompt2]) | pipe([{"role":"user","content":"..."}])
- GenConfig: GenerationConfig(max_new_tokens, temperature, top_p, repetition_penalty)
- Quantize: lmdeploy lite auto_awq model_path --work-dir output (W4A16)
- Serve: lmdeploy serve api_server model_path --server-port 23333 --tp N
- Vision: pipeline("InternVL2-8B") + pipe(("text prompt", load_image(path)))

LMDeploy Inference Pipeline

# inference/lmdeploy_pipeline.py — efficient LLM deployment with LMDeploy TurboMind
from __future__ import annotations
import time
from typing import Generator

from lmdeploy import (
    pipeline,
    GenerationConfig,
    TurbomindEngineConfig,
)
from lmdeploy.messages import Response


# ── 1. Pipeline setup ─────────────────────────────────────────────────────────

def build_pipeline(
    model_path:      str   = "meta-llama/Llama-3.1-8B-Instruct",
    max_batch_size:  int   = 128,
    tp:              int   = 1,
    cache_ratio:     float = 0.8,
    quant_policy:    int   = 0,    # KV-cache quant: 0=none, 4=int4, 8=int8
) -> "pipeline":
    """
    Build a TurboMind pipeline with PagedAttention.
    cache_ratio: fraction of free GPU memory allocated to the KV cache.
    quant_policy: KV-cache quantization (0=fp16/bf16, 4=int4, 8=int8).
    """
    backend_config = TurbomindEngineConfig(
        max_batch_size=max_batch_size,
        cache_max_entry_count=cache_ratio,
        tp=tp,
        quant_policy=quant_policy,
        rope_scaling_factor=1.0,
        session_len=4096,      # Max sequence length per session
    )
    pipe = pipeline(model_path, backend_config=backend_config)
    print(f"Pipeline ready: {model_path} | tp={tp} | batch={max_batch_size}")
    return pipe


def build_pipeline_quantized(
    awq_model_path: str = "./llama3-awq-4bit",
    tp:             int = 1,
) -> "pipeline":
    """Load a pre-quantized AWQ model for 2-4x memory savings."""
    backend_config = TurbomindEngineConfig(
        max_batch_size=256,
        cache_max_entry_count=0.85,
        tp=tp,
        model_format="awq",    # Weights quantized offline via `lmdeploy lite auto_awq`
    )
    return pipeline(awq_model_path, backend_config=backend_config)


# ── 2. Generation config ──────────────────────────────────────────────────────

def make_gen_config(
    max_tokens:  int   = 512,
    temperature: float = 0.7,
    top_p:       float = 0.9,
    top_k:       int   = 50,
    repetition:  float = 1.05,
) -> GenerationConfig:
    return GenerationConfig(
        max_new_tokens=max_tokens,
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        repetition_penalty=repetition,
    )


GREEDY_CONFIG = GenerationConfig(max_new_tokens=512, temperature=0.0, top_p=1.0)
CREATIVE_CONFIG = GenerationConfig(max_new_tokens=1024, temperature=0.9, top_p=0.95)
PRECISE_CONFIG  = GenerationConfig(max_new_tokens=256, temperature=0.1, top_p=0.9)


# ── 3. Single and batch inference ─────────────────────────────────────────────

def chat(pipe, prompt: str, gen_config: GenerationConfig = None) -> str:
    """Single-turn text generation."""
    response: Response = pipe(
        prompt,
        gen_config=gen_config or GREEDY_CONFIG,
    )
    return response.text


def chat_turns(pipe, messages: list[dict], gen_config: GenerationConfig = None) -> str:
    """
    Multi-turn chat with message history.
    messages: [{"role": "user"|"assistant"|"system", "content": "..."}]
    """
    response: Response = pipe(
        messages,
        gen_config=gen_config or GREEDY_CONFIG,
    )
    return response.text


def batch_inference(
    pipe,
    prompts:    list[str],
    gen_config: GenerationConfig = None,
) -> list[str]:
    """
    Batch inference — LMDeploy schedules all prompts concurrently.
    Throughput scales with batch size up to max_batch_size.
    """
    responses: list[Response] = pipe(
        prompts,
        gen_config=gen_config or GREEDY_CONFIG,
    )
    return [r.text for r in responses]


def batch_chat(
    pipe,
    conversations: list[list[dict]],
    gen_config: GenerationConfig = None,
) -> list[str]:
    """Batch multi-turn chat — each item is a conversation history."""
    responses = pipe(conversations, gen_config=gen_config or GREEDY_CONFIG)
    return [r.text for r in responses]


# ── 4. Streaming generation ───────────────────────────────────────────────────

def stream_response(
    pipe,
    prompt:     str,
    gen_config: GenerationConfig = None,
) -> Generator[str, None, None]:
    """Stream tokens as they're generated."""
    for response in pipe.stream_infer(
        prompt,
        gen_config=gen_config or make_gen_config(max_tokens=512),
    ):
        if response.text:
            yield response.text


def print_stream(pipe, prompt: str):
    """Demo: print streaming output."""
    print(f"Q: {prompt}\nA: ", end="", flush=True)
    for chunk in stream_response(pipe, prompt):
        print(chunk, end="", flush=True)
    print()


# ── 5. Document processing pipeline ──────────────────────────────────────────

def summarize_documents(
    pipe,
    documents: list[str],
    max_words:  int = 50,
) -> list[str]:
    """Batch document summarization — efficient with continuous batching."""
    prompts = [
        f"Summarize the following document in at most {max_words} words:\n\n{doc}\n\nSummary:"
        for doc in documents
    ]
    return batch_inference(pipe, prompts, PRECISE_CONFIG)


def classify_texts(
    pipe,
    texts:  list[str],
    labels: list[str],
) -> list[str]:
    """
    Batch text classification.
    Returns one label per text — post-process by matching label names in output.
    """
    label_str = ", ".join(f'"{l}"' for l in labels)
    prompts = [
        f"Classify the following text into exactly one of these categories: {label_str}.\n"
        f"Text: {text}\n"
        f"Category (respond with just the category name):"
        for text in texts
    ]
    raw_outputs = batch_inference(
        pipe,
        prompts,
        GenerationConfig(max_new_tokens=10, temperature=0.0),
    )
    # Match output to closest label
    results = []
    for output in raw_outputs:
        output_lower = output.strip().lower()
        matched = next(
            (l for l in labels if l.lower() in output_lower),
            labels[0],   # Default to first label if no match
        )
        results.append(matched)
    return results


def extract_structured(
    pipe,
    texts:  list[str],
    schema: str,
) -> list[str]:
    """Extract structured info as JSON strings."""
    prompts = [
        f"Extract information matching this JSON schema:\n{schema}\n\n"
        f"Text: {text}\n\nJSON:"
        for text in texts
    ]
    return batch_inference(pipe, prompts, GenerationConfig(
        max_new_tokens=256,
        temperature=0.0,
    ))


# ── 6. Vision model pipeline ──────────────────────────────────────────────────

def build_vision_pipeline(
    model_path: str = "OpenGVLab/InternVL2-8B",
    tp:         int = 1,
) -> "pipeline":
    """Build a vision-language model pipeline."""
    backend_config = TurbomindEngineConfig(
        max_batch_size=32,
        cache_max_entry_count=0.75,
        tp=tp,
    )
    return pipeline(model_path, backend_config=backend_config)


def describe_image(vision_pipe, image_path: str) -> str:
    """Generate image description."""
    from lmdeploy.vl import load_image
    image = load_image(image_path)
    response = vision_pipe(
        ("Describe this image in detail, noting key objects, colors, and any text.", image)
    )
    return response.text


def visual_qa(vision_pipe, image_path: str, question: str) -> str:
    """Answer a question about an image."""
    from lmdeploy.vl import load_image
    image = load_image(image_path)
    response = vision_pipe((question, image))
    return response.text


def batch_image_captioning(
    vision_pipe,
    image_paths: list[str],
) -> list[str]:
    """Generate captions for multiple images in parallel."""
    from lmdeploy.vl import load_image
    inputs = [
        ("Generate a concise one-sentence caption for this image.", load_image(p))
        for p in image_paths
    ]
    responses = vision_pipe(inputs)
    return [r.text for r in responses]


# ── 7. Benchmark ──────────────────────────────────────────────────────────────

def benchmark_throughput(
    pipe,
    prompt:      str  = "Explain the attention mechanism in transformers.",
    batch_sizes: tuple = (1, 8, 32, 64),
    n_iters:     int  = 3,
):
    """Benchmark tokens/sec at different batch sizes."""
    print("\n=== LMDeploy Throughput Benchmark ===")
    print(f"{'Batch':>8} {'Tokens/s':>12} {'Latency(ms)':>14}")
    print("-" * 38)

    gen_config = GenerationConfig(max_new_tokens=64, temperature=0.0)

    for batch_size in batch_sizes:
        prompts = [prompt] * batch_size
        latencies = []

        for _ in range(n_iters):
            t0 = time.perf_counter()
            responses = pipe(prompts, gen_config=gen_config)
            elapsed = time.perf_counter() - t0
            latencies.append(elapsed)

        avg_latency = sum(latencies) / len(latencies)
        # Approximate tokens by whitespace words; `responses` already covers the full batch
        total_tokens = sum(len(r.text.split()) for r in responses)
        tps = total_tokens / avg_latency

        print(f"{batch_size:>8} {tps:>12.0f} {avg_latency*1000:>14.1f}")


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    # Build pipeline (adjust model_path for your environment)
    pipe = build_pipeline(
        model_path="meta-llama/Llama-3.1-8B-Instruct",
        max_batch_size=64,
        tp=1,
    )

    # Single chat
    answer = chat(pipe, "What is PagedAttention and why does it improve LLM throughput?")
    print(f"Answer: {answer[:150]}...")

    # Batch inference
    questions = [
        "What is quantization in neural networks?",
        "Explain continuous batching for LLM serving.",
        "What is the difference between AWQ and GPTQ quantization?",
    ]
    answers = batch_inference(pipe, questions)
    for q, a in zip(questions, answers):
        print(f"\nQ: {q}\nA: {a[:100]}...")

    # Document summarization
    docs = [
        "LMDeploy is an inference toolkit for compressing and deploying large language models. "
        "It provides TurboMind engine with PagedAttention for high throughput and low latency. "
        "Features include AWQ quantization, tensor parallelism, and vision model support.",
    ]
    summaries = summarize_documents(pipe, docs, max_words=20)
    print(f"\nSummary: {summaries[0]}")

    # Benchmark
    benchmark_throughput(pipe, batch_sizes=[1, 4, 16])

When you need the broadest model-format support, including GGUF, GPTQ, and Marlin kernels, along with the largest open-source community and the most active LoRA hot-swapping development, vLLM covers the widest compatibility matrix. LMDeploy's TurboMind engine, with W4A16 AWQ and KV-cache INT8 quantization, offers better memory efficiency for Qwen, InternLM, and LLaMA architectures deployed on limited VRAM. When you need complex multi-step generation programs with fork/join parallelism and RadixAttention for shared-prefix workloads, SGLang provides a higher-level programming model; LMDeploy's one-command lmdeploy lite auto_awq quantization pipeline and native vision-language support (InternVL2) make it the faster path to production for teams optimizing inference cost with quantized models. The Claude Skills 360 bundle includes LMDeploy skill sets covering TurbomindEngineConfig setup, AWQ quantization, batch pipeline inference, streaming, vision model support, API server deployment, and throughput benchmarking. Start with the free tier to try efficient LLM deployment code generation.
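To make the low-VRAM point above concrete, here is a minimal sketch that loads a model already quantized with lmdeploy lite auto_awq and enables an int8 KV cache in the same engine config. The path, batch size, and cache ratio are illustrative placeholders, not recommended values.

# low_vram_pipeline.py — combine AWQ weights with an int8 KV cache for tight VRAM budgets
from lmdeploy import GenerationConfig, TurbomindEngineConfig, pipeline

pipe = pipeline(
    "./llama3-awq-4bit",                   # output of `lmdeploy lite auto_awq`
    backend_config=TurbomindEngineConfig(
        model_format="awq",                # 4-bit AWQ weights (W4A16)
        quant_policy=8,                    # int8 KV cache
        cache_max_entry_count=0.6,         # leave headroom on small GPUs
        max_batch_size=32,
    ),
)
print(pipe("Why quantize the KV cache?", gen_config=GenerationConfig(max_new_tokens=128)).text)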


Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
