Development

Claude Code for Machine Learning: Model Integration, Fine-tuning, and ML Pipelines

Published: July 26, 2026
Read time: 9 min
By: Claude Skills 360

Machine learning in production means more than model training — it means integrating model APIs correctly, building embedding pipelines, handling model versioning, and serving predictions fast enough for real users. Claude Code generates the integration code, evaluation pipelines, and serving infrastructure that connects ML models to applications.

LLM API Integration

Build a content moderation system using the OpenAI API.
Returns: approved/rejected/needs-review with confidence and reason.
Handles rate limits with exponential backoff. Caches results by content hash.
// services/ContentModerationService.ts
import OpenAI from 'openai';
import { createHash } from 'crypto';
import { Redis } from 'ioredis';

interface ModerationResult {
  decision: 'approved' | 'rejected' | 'needs-review';
  confidence: number;
  reason: string;
  categories: string[];
}

export class ContentModerationService {
  private client: OpenAI;
  private cache: Redis;
  private readonly CACHE_TTL = 86400; // 24h — same content gets same result

  constructor(apiKey: string, redis: Redis) {
    this.client = new OpenAI({ apiKey });
    this.cache = redis;
  }

  async moderate(content: string): Promise<ModerationResult> {
    // Check cache first
    const cacheKey = `moderation:${createHash('sha256').update(content).digest('hex')}`;
    const cached = await this.cache.get(cacheKey);
    if (cached) return JSON.parse(cached);

    // Use OpenAI's free moderation endpoint as a fast first pass
    const modResult = await this.client.moderations.create({ input: content });
    const flagged = modResult.results[0];

    if (flagged.flagged) {
      const categories = Object.entries(flagged.categories)
        .filter(([, value]) => value)
        .map(([key]) => key);

      const result: ModerationResult = {
        decision: 'rejected',
        confidence: 0.95,
        reason: `Flagged for: ${categories.join(', ')}`,
        categories,
      };
      await this.cache.setex(cacheKey, this.CACHE_TTL, JSON.stringify(result));
      return result;
    }

    // Not flagged by basic moderation — use GPT for nuanced cases
    const result = await this.classifyWithGPT(content, cacheKey);
    return result;
  }

  private async classifyWithGPT(content: string, cacheKey: string): Promise<ModerationResult> {
    const response = await this.client.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [
        {
          role: 'system',
          content: `You are a content moderation system for a professional platform.
Classify user-generated content as:
- "approved": appropriate, professional, on-topic
- "needs-review": borderline, unclear intent, or requires human judgment
- "rejected": spam, harassment, off-topic, or inappropriate

Respond in JSON: {"decision": "...", "confidence": 0.0-1.0, "reason": "brief explanation", "categories": ["list", "of", "concerns"]}`,
        },
        { role: 'user', content: `Content to moderate:\n\n${content.slice(0, 2000)}` },
      ],
      response_format: { type: 'json_object' },
      temperature: 0.1, // Low temperature for consistent classification
    });

    const result = JSON.parse(response.choices[0].message.content!) as ModerationResult;
    await this.cache.setex(cacheKey, this.CACHE_TTL, JSON.stringify(result));
    return result;
  }
}
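The prompt above asks for exponential backoff on rate limits, but the generated service calls the API directly. A generic retry wrapper is easy to layer on; here is a sketch (the function and option names are illustrative, not part of the service above):

```typescript
// utils/retry.ts
// Retry an async operation with exponential backoff and jitter.
// isRetryable decides which errors are worth retrying (HTTP 429 by default).
export async function withBackoff<T>(
  fn: () => Promise<T>,
  {
    retries = 5,
    baseDelayMs = 500,
    isRetryable = (err: unknown) => (err as { status?: number })?.status === 429,
  } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries || !isRetryable(err)) throw err;
      // 500ms, 1s, 2s, ... capped at 30s, plus up to 20% random jitter
      const delayMs = Math.min(baseDelayMs * 2 ** attempt, 30_000);
      await new Promise(resolve => setTimeout(resolve, delayMs * (1 + Math.random() * 0.2)));
    }
  }
  throw lastError;
}
```

Wrapping the moderation call is then a one-liner: `await withBackoff(() => this.client.moderations.create({ input: content }))`.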
Build semantic search over our product catalog using embeddings.
When users search "comfortable shoes for standing all day", 
return products by semantic similarity, not just keyword matching.
// services/EmbeddingService.ts
import OpenAI from 'openai';
import { db } from '../db';

const openai = new OpenAI();

export async function generateEmbedding(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: text.slice(0, 8000), // Truncate long input (rough character cap under the model's token limit)
    dimensions: 1536,
  });
  return response.data[0].embedding;
}

// Index products (run once, re-run when products change)
export async function indexProducts() {
  const products = await db('products')
    .where('embedding_updated_at', '<', db.raw('updated_at'))
    .orWhereNull('embedding_updated_at')
    .select('id', 'name', 'description', 'category');

  console.log(`Indexing ${products.length} products...`);

  for (const product of products) {
    const text = `${product.name}. ${product.description}. Category: ${product.category}`;
    const embedding = await generateEmbedding(text);

    // Store as PostgreSQL vector (requires pgvector extension)
    await db('products').where('id', product.id).update({
      embedding: JSON.stringify(embedding),
      embedding_updated_at: new Date(),
    });

    await new Promise(r => setTimeout(r, 50)); // Rate limit
  }
}

// Semantic search query
export async function semanticSearch(query: string, limit = 10) {
  const queryEmbedding = await generateEmbedding(query);

  // pgvector cosine distance operator: <=>
  const results = await db.raw(`
    SELECT 
      id, name, description, price_cents, category,
      1 - (embedding::vector <=> ?::vector) AS similarity
    FROM products
    WHERE embedding IS NOT NULL
    ORDER BY embedding::vector <=> ?::vector
    LIMIT ?
  `, [JSON.stringify(queryEmbedding), JSON.stringify(queryEmbedding), limit]);

  return results.rows.filter((r: any) => r.similarity > 0.7); // Filter low similarity
}
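pgvector does the similarity math inside Postgres, but an in-process version is useful for unit tests or for re-ranking a small candidate set without a database round trip. A minimal sketch (this helper is hypothetical, not part of the service above):

```typescript
// utils/cosine.ts
// Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector dimensions must match');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA === 0 || normB === 0) return 0; // Zero vector has no direction
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Note this matches the SQL above: the `<=>` operator returns cosine *distance*, so `1 - distance` equals this similarity.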

scikit-learn Classification Pipeline

Build a churn prediction model using our customer data.
Features: days since last purchase, total orders, avg order value, support tickets.
Train, evaluate, and export for serving.
# ml/train_churn_model.py
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report, roc_auc_score
import joblib

# Feature engineering
def build_features(df: pd.DataFrame) -> pd.DataFrame:
    features = df.copy()
    features['days_since_purchase'] = (
        pd.Timestamp.now() - pd.to_datetime(features['last_purchase_date'])
    ).dt.days
    features['purchase_frequency'] = features['total_orders'] / (
        (pd.Timestamp.now() - pd.to_datetime(features['customer_since'])).dt.days / 30
    )
    features['avg_order_value'] = features['total_revenue'] / features['total_orders'].clip(lower=1)
    return features[['days_since_purchase', 'purchase_frequency', 'total_orders', 
                      'avg_order_value', 'support_ticket_count', 'email_open_rate']]

# Load and prepare data
df = pd.read_csv('data/customers.csv')
X = build_features(df)
y = df['churned'].astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

# Pipeline: scale + classify
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('classifier', GradientBoostingClassifier(
        n_estimators=200,
        max_depth=4,
        learning_rate=0.05,
        subsample=0.8,
        random_state=42,
    )),
])

# Train
pipeline.fit(X_train, y_train)

# Evaluate
y_pred = pipeline.predict(X_test)
y_prob = pipeline.predict_proba(X_test)[:, 1]

print(classification_report(y_test, y_pred))
print(f'ROC-AUC: {roc_auc_score(y_test, y_prob):.4f}')

# Cross-validation for more robust estimate
cv_scores = cross_val_score(pipeline, X, y, cv=5, scoring='roc_auc')
print(f'CV ROC-AUC: {cv_scores.mean():.4f} ± {cv_scores.std():.4f}')

# Feature importance
feature_names = ['days_since_purchase', 'purchase_frequency', 'total_orders',
                 'avg_order_value', 'support_ticket_count', 'email_open_rate']
importances = pipeline.named_steps['classifier'].feature_importances_
for name, importance in sorted(zip(feature_names, importances), key=lambda x: -x[1]):
    print(f'  {name}: {importance:.3f}')

# Export
joblib.dump(pipeline, 'models/churn_model_v1.joblib')
print('Model saved to models/churn_model_v1.joblib')

Serving Predictions

// api/routes/predictions.ts
import { z } from 'zod';
import { db } from '../db';

// joblib models are Python artifacts: rather than trying to deserialize
// them in Node, call a Python prediction microservice over HTTP.
const churnPredictionSchema = z.object({
  customerId: z.string(),
  daysSincePurchase: z.number(),
  totalOrders: z.number(),
  avgOrderValue: z.number(),
  supportTicketCount: z.number(),
  emailOpenRate: z.number(),
});

app.post('/api/predictions/churn', async (req, res) => {
  const data = churnPredictionSchema.parse(req.body);

  // Call the Python prediction microservice
  const predResponse = await fetch(`${process.env.ML_SERVICE_URL}/predict/churn`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      features: [
        data.daysSincePurchase,
        data.totalOrders,
        data.avgOrderValue,
        data.supportTicketCount,
        data.emailOpenRate,
      ],
    }),
  });

  if (!predResponse.ok) {
    return res.status(502).json({ error: 'Prediction service unavailable' });
  }

  const { probability, label } = await predResponse.json();

  // Log prediction for monitoring (track model performance over time)
  await db('predictions').insert({
    customer_id: data.customerId,
    model: 'churn_v1',
    prediction: label,
    confidence: probability,
    predicted_at: new Date(),
  });

  res.json({ churnRisk: probability, willChurn: label === 1 });
});
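The predictions table only pays off once it is queried: after actual churn outcomes are observed, the logged predictions can be scored offline to catch model drift. A sketch of that aggregation (the row shape assumes predictions have already been joined with observed outcomes; the names are illustrative):

```typescript
// Logged prediction joined with the customer's observed outcome.
interface LoggedPrediction {
  prediction: number; // model label: 1 = predicted churn
  actual: number;     // observed outcome: 1 = customer churned
}

// Precision and recall of the churn model over logged predictions.
export function evaluatePredictions(rows: LoggedPrediction[]) {
  let tp = 0, fp = 0, fn = 0;
  for (const { prediction, actual } of rows) {
    if (prediction === 1 && actual === 1) tp++;      // correctly flagged churner
    else if (prediction === 1 && actual === 0) fp++; // false alarm
    else if (prediction === 0 && actual === 1) fn++; // missed churner
  }
  return {
    precision: tp + fp > 0 ? tp / (tp + fp) : 0,
    recall: tp + fn > 0 ? tp / (tp + fn) : 0,
  };
}
```

Running this on a weekly schedule and comparing against the ROC-AUC from training gives an early warning when the deployed model degrades.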

For integrating LLM capabilities into existing RAG pipelines and knowledge bases, see the LLM integrations guide. For semantic search at scale with Elasticsearch or Typesense, see the search guide. The Claude Skills 360 bundle includes ML integration skill sets for embeddings, model serving, and prediction APIs. Start with the free tier to try ML pipeline generation.

Put these ideas into practice

Claude Skills 360 gives you production-ready skills for everything in this article — and 2,350+ more. Start free or go all-in.
