TorchServe is PyTorch’s production model serving framework. Install it with pip install torchserve torch-model-archiver. Package a model with torch-model-archiver --model-name sentiment --version 1.0 --serialized-file model.pt --handler handler.py --extra-files index_to_name.json, which creates sentiment.mar. Then torchserve --start --model-store model_store/ --models sentiment=sentiment.mar serves it on ports 8080 (inference) and 8081 (management), with metrics on 8082. A handler class extends BaseHandler and overrides initialize(context) for model loading, preprocess(data) for input parsing, inference(data) for the forward pass, and postprocess(output) for response formatting. Use torch.jit.script/torch.jit.trace or torch.export for optimized model packaging. Batch inference is configured per model: set batch_size and max_batch_delay at registration time (POST /models?url=sentiment.mar&batch_size=8&max_batch_delay=100) or via the models property in config.properties. For versioning, archive with torch-model-archiver --version 2.0, register the new .mar, then promote it with PUT /models/sentiment/2.0/set-default; clients can pin a specific version via POST /predictions/sentiment/2.0, and workers scale with PUT /models/sentiment?min_worker=2&max_worker=4. config.properties sets inference_address, management_address, metrics_address, number_of_netty_threads, job_queue_size, and default_response_timeout. metrics_address=http://0.0.0.0:8082 exposes a Prometheus-compatible /metrics endpoint. For Docker, use the official pytorch/torchserve GPU images; on Kubernetes, pair a Deployment with a Service and a HorizontalPodAutoscaler. Claude Code generates TorchServe handlers, model archiver scripts, config.properties, Docker setups, and TypeScript inference clients.
CLAUDE.md for TorchServe
## TorchServe Stack
- Version: torchserve >= 0.9, torch-model-archiver >= 0.9
- Handler: extend BaseHandler — initialize/preprocess/inference/postprocess
- Archive: torch-model-archiver --model-name --version --serialized-file --handler --extra-files
- Serve: torchserve --start --model-store ./model_store --models name=file.mar
- Inference: POST http://localhost:8080/predictions/{model_name}
- Management: GET/PUT/POST http://localhost:8081/models
- Config: config.properties — batch_size, max_batch_delay, number_of_netty_threads
- Metrics: http://localhost:8082/metrics (Prometheus format)
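The endpoints listed above are easy to smoke-test once a model is registered — a minimal sketch using Python requests, assuming a local TorchServe with the sentiment model loaded (the file name is hypothetical):
# smoke_check.py — hit the three TorchServe ports
import requests

print(requests.get("http://localhost:8080/ping", timeout=5).json())         # inference health
print(requests.get("http://localhost:8081/models", timeout=5).json())       # registered models
print(requests.get("http://localhost:8082/metrics", timeout=5).text[:300])  # Prometheus text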
Custom Handler
# handler.py — TorchServe custom handler for sentiment classification
from __future__ import annotations
import json
import logging
import os
import time
from typing import Any
import torch
import torch.nn.functional as F
from ts.torch_handler.base_handler import BaseHandler
logger = logging.getLogger(__name__)
class SentimentHandler(BaseHandler):
"""
Custom TorchServe handler for HuggingFace sentiment classification.
Extends BaseHandler with tokenization preprocessing.
"""
def __init__(self):
super().__init__()
self.tokenizer = None
self.labels: list[str] = []
self.initialized = False
# ── Lifecycle ────────────────────────────────────────────────────────────
def initialize(self, context) -> None:
"""Load model and tokenizer. Called once per worker on startup."""
self.manifest = context.manifest
props = context.system_properties
model_dir = props.get("model_dir")
        self.device = torch.device(
            f"cuda:{props.get('gpu_id')}"
            if torch.cuda.is_available() and props.get("gpu_id") is not None
            else "cpu"
        )
# Load serialized model (TorchScript or state_dict)
serialized_file = self.manifest["model"]["serializedFile"]
model_pt_path = os.path.join(model_dir, serialized_file)
self.model = torch.jit.load(model_pt_path, map_location=self.device)
self.model.eval()
# Load tokenizer (packaged as extra_files)
from transformers import AutoTokenizer
self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
# Load label mapping from extra_files
label_file = os.path.join(model_dir, "index_to_name.json")
if os.path.exists(label_file):
with open(label_file) as f:
mapping = json.load(f)
self.labels = [mapping[str(i)] for i in range(len(mapping))]
else:
self.labels = ["NEGATIVE", "NEUTRAL", "POSITIVE"]
self.initialized = True
logger.info("SentimentHandler initialized on %s", self.device)
# ── Preprocessing ────────────────────────────────────────────────────────
def preprocess(self, data: list[dict]) -> dict[str, torch.Tensor]:
"""
Tokenize incoming text requests.
data is a list of {"body": {"text": "...", "max_length": 512}} dicts.
"""
texts = []
max_length = 512
        for item in data:
            body = item.get("body") or item.get("data")
            if isinstance(body, (bytes, bytearray)):
                body = json.loads(body.decode("utf-8"))
            if isinstance(body, str):
                body = json.loads(body)
            texts.append(body.get("text", ""))
            if "max_length" in body:
                max_length = min(body["max_length"], 512)
tokens = self.tokenizer(
texts,
max_length=max_length,
truncation=True,
padding=True,
return_tensors="pt",
)
return {k: v.to(self.device) for k, v in tokens.items()}
# ── Inference ────────────────────────────────────────────────────────────
    def inference(self, inputs: dict[str, torch.Tensor]) -> torch.Tensor:
        """Run the model forward pass."""
        with torch.no_grad():
            outputs = self.model(**inputs)
        # A torch.jit.trace'd HF model returns a tuple, not a ModelOutput with .logits
        logits = outputs[0] if isinstance(outputs, tuple) else outputs.logits
        return F.softmax(logits, dim=-1)
# ── Postprocessing ───────────────────────────────────────────────────────
def postprocess(self, probs: torch.Tensor) -> list[dict]:
"""Format results as JSON-serializable dicts."""
results = []
for row in probs:
idx = int(row.argmax())
results.append({
"label": self.labels[idx] if idx < len(self.labels) else str(idx),
"score": round(float(row[idx]), 4),
"all_scores": {
self.labels[i]: round(float(s), 4)
for i, s in enumerate(row)
if i < len(self.labels)
},
})
return results
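The full initialize → preprocess → inference → postprocess lifecycle can be exercised locally before packaging by faking the TorchServe context — a minimal sketch, assuming the model_artifacts/ directory produced by the archiver script below (test_handler.py is a hypothetical name):
# test_handler.py — local smoke test for SentimentHandler, no server required
from types import SimpleNamespace
from handler import SentimentHandler

# Stand-in for the context TorchServe passes to initialize()
ctx = SimpleNamespace(
    manifest={"model": {"serializedFile": "model.pt"}},
    system_properties={"model_dir": "model_artifacts", "gpu_id": None},
)
handler = SentimentHandler()
handler.initialize(ctx)
batch = [{"body": {"text": "I love this!"}}, {"body": {"text": "Terrible experience."}}]
print(handler.postprocess(handler.inference(handler.preprocess(batch))))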
Model Archiver Script
# scripts/archive_model.py — package model for TorchServe
import subprocess
import json
import os
from pathlib import Path
def export_torchscript(model_name: str = "cardiffnlp/twitter-roberta-base-sentiment-latest"):
"""Export HuggingFace model to TorchScript for TorchServe."""
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
print(f"Loading {model_name}...")
tokenizer = AutoTokenizer.from_pretrained(model_name)
    # torchscript=True makes the model return tuples, which torch.jit.trace requires
    model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True)
    model.eval()
# Trace with dummy input
dummy = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
traced = torch.jit.trace(model, (dummy["input_ids"], dummy["attention_mask"]))
os.makedirs("model_artifacts", exist_ok=True)
torch.jit.save(traced, "model_artifacts/model.pt")
# Save tokenizer files
tokenizer.save_pretrained("model_artifacts/")
# Create label mapping
label_map = {
"0": "NEGATIVE",
"1": "NEUTRAL",
"2": "POSITIVE",
}
with open("model_artifacts/index_to_name.json", "w") as f:
json.dump(label_map, f)
print("Exported TorchScript model to model_artifacts/")
return "model_artifacts"
def archive_model(
model_dir: str = "model_artifacts",
version: str = "1.0",
model_store: str = "model_store",
) -> str:
"""Create .mar archive with torch-model-archiver."""
os.makedirs(model_store, exist_ok=True)
# Collect extra files (tokenizer vocab, config, etc.)
extra = [
f for f in Path(model_dir).glob("**/*")
if f.is_file() and f.name not in ("model.pt",)
and f.suffix in (".json", ".txt", ".model")
]
extra_files_arg = ",".join(str(f) for f in extra)
cmd = [
"torch-model-archiver",
"--model-name", "sentiment",
"--version", version,
"--serialized-file", f"{model_dir}/model.pt",
"--handler", "handler.py",
"--export-path", model_store,
"--force",
]
if extra_files_arg:
cmd += ["--extra-files", extra_files_arg]
print("Running:", " ".join(cmd))
subprocess.run(cmd, check=True)
mar_path = f"{model_store}/sentiment.mar"
print(f"Created: {mar_path}")
return mar_path
if __name__ == "__main__":
artifact_dir = export_torchscript()
archive_model(model_dir=artifact_dir)
print("Ready: torchserve --start --model-store model_store/ --models sentiment=sentiment.mar")
config.properties
# config.properties — TorchServe server configuration
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# Threading
number_of_netty_threads=8
job_queue_size=1000
# Batching is per-model in TorchServe: set it here via the models property,
# or at registration time (POST /models?url=...&batch_size=8&max_batch_delay=50)
models={\
  "sentiment": {\
    "1.0": {\
        "defaultVersion": true,\
        "marName": "sentiment.mar",\
        "batchSize": 8,\
        "maxBatchDelay": 50,\
        "responseTimeout": 120\
    }\
  }\
}
# Timeouts
default_response_timeout=120
unregister_model_timeout=120
# Model store
model_store=/home/model-server/model-store
load_models=sentiment.mar
# JVM options
vmargs=-Xmx4g -XX:MaxDirectMemorySize=512m -XX:ReservedCodeCacheSize=240m -XX:+UseContainerSupport
# Metrics
metrics_format=prometheus
# Allow overriding config keys via TS_-prefixed environment variables
enable_envvars_config=true
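Batching can also be set when registering a model — a sketch of registering via the Management API, using the documented registration parameters (url, initial_workers, batch_size, max_batch_delay, synchronous):
# register_model.py — register a .mar from the model store with batching enabled
import requests

resp = requests.post(
    "http://localhost:8081/models",
    params={
        "url": "sentiment.mar",   # must be present in the model store
        "initial_workers": 2,
        "batch_size": 8,
        "max_batch_delay": 100,   # ms to wait while filling a batch
        "synchronous": "true",
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())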
Docker Setup
# Dockerfile — TorchServe GPU container
FROM pytorch/torchserve:0.9.0-gpu
USER root
# Install extra Python deps
RUN pip install --no-cache-dir \
    "transformers>=4.40.0" \
    sentencepiece \
    protobuf
# Copy artifacts
COPY model_store/ /home/model-server/model-store/
COPY handler.py /home/model-server/handler.py
COPY config.properties /home/model-server/config.properties
USER model-server
EXPOSE 8080 8081 8082
CMD ["torchserve", \
"--start", \
"--model-store", "/home/model-server/model-store", \
"--ts-config", "/home/model-server/config.properties", \
"--foreground"]
TypeScript Client
// lib/torchserve/client.ts — TypeScript client for TorchServe
const TORCHSERVE_URL = process.env.TORCHSERVE_URL ?? "http://localhost:8080"
const MGMT_URL = process.env.TORCHSERVE_MGMT_URL ?? "http://localhost:8081"
export type SentimentResult = {
label: string
score: number
all_scores: Record<string, number>
}
export async function predict(text: string): Promise<SentimentResult> {
const res = await fetch(`${TORCHSERVE_URL}/predictions/sentiment`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ text }),
})
if (!res.ok) throw new Error(`TorchServe ${res.status}: ${await res.text()}`)
return res.json()
}
export async function predictBatch(texts: string[]): Promise<SentimentResult[]> {
  // Fire requests concurrently; TorchServe's server-side batching (batch_size /
  // max_batch_delay) aggregates in-flight requests into a single forward pass.
  return Promise.all(texts.map(text => predict(text)))
}
/** Management API — list registered models */
export async function listModels(): Promise<{ models: { modelName: string; modelVersion: string }[] }> {
  const res = await fetch(`${MGMT_URL}/models`)
  if (!res.ok) throw new Error(`List failed ${res.status}: ${await res.text()}`)
  return res.json()
}
/** Management API — scale workers for a model */
export async function scaleWorkers(modelName: string, minWorkers: number, maxWorkers: number): Promise<void> {
const url = `${MGMT_URL}/models/${modelName}?min_worker=${minWorkers}&max_worker=${maxWorkers}`
const res = await fetch(url, { method: "PUT" })
if (!res.ok) throw new Error(`Scale failed: ${await res.text()}`)
}
/** Management API — set default version for A/B routing */
export async function setDefaultVersion(modelName: string, version: string): Promise<void> {
const res = await fetch(`${MGMT_URL}/models/${modelName}/${version}/set-default`, { method: "PUT" })
if (!res.ok) throw new Error(`Version set failed: ${await res.text()}`)
}
Reach for Ray Serve instead when you need Python-native multi-model pipelines with actor-based horizontal scaling, dynamic autoscaling, and DeploymentHandle composition across a Ray cluster: Ray Serve handles complex DAGs of models, while TorchServe is optimized specifically for PyTorch model packaging, TorchScript/TensorRT optimization, and the model archiver workflow that turns trained models into self-contained .mar artifacts. Reach for Triton Inference Server when you need NVIDIA’s maximum-throughput inference server with TensorRT engine optimization, concurrent model execution, and multi-framework support (ONNX, TensorFlow SavedModel, TensorRT, PyTorch TorchScript) in a single server: Triton maximizes raw GPU throughput at production scale, while TorchServe offers a simpler, Python-idiomatic API well suited to PyTorch-only deployments. The Claude Skills 360 bundle includes TorchServe skill sets covering custom handlers, model archiving, batching configuration, and Docker deployment. Start with the free tier to try PyTorch serving generation.