
Claude Code for OpenVINO: Intel AI Inference Optimization

Published: October 16, 2027
Read time: 5 min read
By: Claude Skills 360

OpenVINO optimizes AI inference on Intel hardware. Install the stack with pip install openvino nncf optimum[openvino], then create a runtime: import openvino as ov; core = ov.Core(). Convert a PyTorch model with ov_model = ov.convert_model(pt_model, example_input=torch.randn(1, 3, 224, 224)), or load an ONNX file with ov_model = core.read_model("model.onnx"). Compile for a target with compiled = core.compile_model(ov_model, "CPU") and run inference with result = compiled(inputs); output = result[compiled.output(0)]. core.available_devices lists the targets: CPU, GPU, NPU, plus the AUTO plugin.

Performance hints steer compilation: core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"}) for batch workloads, "LATENCY" for real-time, and core.set_property({"CACHE_DIR": "./ov_cache"}) caches compiled models to avoid recompilation. INT8 quantization is a two-liner: from nncf import quantize; quantized = quantize(ov_model, calibration_dataset). For high throughput, use the async queue: infer_queue = ov.AsyncInferQueue(compiled, 4); infer_queue.set_callback(lambda request, userdata: ...); infer_queue.start_async(inputs); infer_queue.wait_all().

Hugging Face models export through optimum-intel: from optimum.intel import OVModelForCausalLM, OVModelForSequenceClassification; model = OVModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", export=True). LLM inference runs through GenAI: import openvino_genai; pipe = openvino_genai.LLMPipeline("./llama-ov", "CPU"); result = pipe.generate("What is OpenVINO?", max_new_tokens=256), with pipe.get_tokenizer() exposing the tokenizer. Save any converted model with ov.save_model(ov_model, "model.xml"). Claude Code generates OpenVINO conversion scripts, INT8 quantization pipelines, async inference queues, LLM GenAI pipelines, and optimum-intel optimization code.
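As a quick orientation before the full pipeline below, here is a minimal sketch of the convert → compile → infer path, assuming PyTorch and torchvision are installed (any torchvision classifier works in place of resnet18):

import numpy as np
import openvino as ov
import torch
import torchvision

core = ov.Core()
pt_model = torchvision.models.resnet18(weights="DEFAULT").eval()
ov_model = ov.convert_model(pt_model, example_input=torch.randn(1, 3, 224, 224))
compiled = core.compile_model(ov_model, "CPU", {"PERFORMANCE_HINT": "LATENCY"})

dummy  = np.random.randn(1, 3, 224, 224).astype(np.float32)
result = compiled({compiled.input(0): dummy})
print(result[compiled.output(0)].shape)   # (1, 1000) ImageNet logits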

CLAUDE.md for OpenVINO

## OpenVINO Stack
- Version: openvino >= 2024.4, nncf >= 2.13, optimum-intel >= 1.20
- Core: ov.Core().compile_model(model, device, config)
- Convert: ov.convert_model(pt_model, example_input=...) | core.read_model("model.onnx")
- Devices: "CPU" | "GPU" | "NPU" | "AUTO" | "MULTI:CPU,GPU"
- Hints: {"PERFORMANCE_HINT": "THROUGHPUT" | "LATENCY" | "CUMULATIVE_THROUGHPUT"}
- INT8: nncf.quantize(ov_model, calibration_dataset) → quantized model
- Async: ov.AsyncInferQueue(compiled, n_jobs) → start_async → wait_all
- LLM: openvino_genai.LLMPipeline(model_dir, "CPU") → pipe.generate(prompt, max_new_tokens)
- Optimum: OVModelForCausalLM.from_pretrained(hf_id, export=True) for HF models

OpenVINO Inference Pipeline

# inference/openvino_pipeline.py — Intel-optimized AI inference with OpenVINO
from __future__ import annotations
import os
import time
from pathlib import Path
from typing import Any

import numpy as np
import openvino as ov


# ── 1. Core setup ─────────────────────────────────────────────────────────────

def setup_core(cache_dir: str = "./ov_cache") -> ov.Core:
    """Initialize OpenVINO Core with model caching."""
    core = ov.Core()
    Path(cache_dir).mkdir(parents=True, exist_ok=True)
    core.set_property({"CACHE_DIR": cache_dir})

    print("Available devices:")
    for device in core.available_devices:
        props = core.get_property(device, "FULL_DEVICE_NAME")
        print(f"  {device}: {props}")
    return core


# ── 2. Model conversion ───────────────────────────────────────────────────────

def convert_pytorch_model(
    pt_model,
    example_input,
    output_path: str = "model.xml",
) -> ov.Model:
    """Convert a PyTorch model to OpenVINO IR format."""
    ov_model = ov.convert_model(pt_model, example_input=example_input)

    if output_path:
        ov.save_model(ov_model, output_path)
        print(f"Saved: {output_path}")
    return ov_model


def convert_torchvision_classifier(
    model_name:  str  = "resnet50",
    output_path: str  = "./resnet50.xml",
) -> ov.Model:
    """Convert a torchvision model to OpenVINO."""
    import torch
    import torchvision

    pt_model = getattr(torchvision.models, model_name)(weights="DEFAULT")
    pt_model.eval()

    dummy = torch.randn(1, 3, 224, 224)
    ov_model = ov.convert_model(pt_model, example_input=dummy)
    ov.save_model(ov_model, output_path)
    print(f"Converted {model_name}{output_path}")
    return ov_model


def load_onnx_model(onnx_path: str, core: ov.Core) -> ov.Model:
    """Load an ONNX model into OpenVINO."""
    model = core.read_model(onnx_path)
    print(f"Loaded ONNX: {onnx_path}")
    print(f"  Inputs:  {[i.get_any_name() for i in model.inputs]}")
    print(f"  Outputs: {[o.get_any_name() for o in model.outputs]}")
    return model
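

# ── 2b. Pinning dynamic shapes (optional) ─────────────────────────────────────
# A hedged sketch, not required by the pipeline above: many exported models
# carry a dynamic batch dimension, and pinning it with Model.reshape before
# compilation lets the plugin pick faster static-shape kernels.

def reshape_to_static(model: ov.Model, shape: list[int]) -> ov.Model:
    """Fix a single-input model to a static shape, e.g. [1, 3, 224, 224]."""
    model.reshape(shape)
    return model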


# ── 3. Compilation and device targeting ──────────────────────────────────────

def compile_for_throughput(
    core:   ov.Core,
    model:  ov.Model,
    device: str = "CPU",
) -> ov.CompiledModel:
    """
    Compile for maximum throughput — best for batch processing.
    AUTO device selects best available hardware automatically.
    """
    config = {
        "PERFORMANCE_HINT": "THROUGHPUT",
        "NUM_STREAMS":       "AUTO",
    }
    compiled = core.compile_model(model, device, config)
    print(f"Compiled for {device} (THROUGHPUT mode)")
    return compiled


def compile_for_latency(
    core:   ov.Core,
    model:  ov.Model,
    device: str = "CPU",
) -> ov.CompiledModel:
    """Compile for minimum latency — best for single-sample real-time inference."""
    config = {
        "PERFORMANCE_HINT": "LATENCY",
        "INFERENCE_NUM_THREADS": str(os.cpu_count() or 4),
    }
    return core.compile_model(model, device, config)


def compile_multi_device(
    core:  ov.Core,
    model: ov.Model,
) -> ov.CompiledModel:
    """Use AUTO plugin to automatically select CPU/GPU/NPU."""
    return core.compile_model(model, "AUTO", {
        "PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT",
    })


# ── 4. Synchronous inference ──────────────────────────────────────────────────

def classify_image(
    compiled: ov.CompiledModel,
    image_array: np.ndarray,   # (H, W, C) uint8
    top_k: int = 5,
) -> list[tuple[int, float]]:
    """
    Run image classification inference.
    Returns list of (class_index, score) tuples.
    """
    # Preprocess: resize, normalize, add batch dim
    import cv2
    img = cv2.resize(image_array, (224, 224)).astype(np.float32) / 255.0
    img = (img - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    img = np.transpose(img, (2, 0, 1))[np.newaxis]   # NCHW

    result    = compiled({compiled.input(0): img})
    logits    = result[compiled.output(0)][0]
    logits    = logits - logits.max()                     # stabilize before exp
    scores    = np.exp(logits) / np.sum(np.exp(logits))   # softmax
    top_ids   = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in top_ids]
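

# ── 4b. Baking preprocessing into the IR (optional) ──────────────────────────
# A hedged alternative to the manual resize/normalize above, using
# openvino.preprocess.PrePostProcessor: the mean/scale math and the NHWC→NCHW
# layout conversion move into the model graph, so callers can pass raw uint8
# frames (already resized to the network's spatial size).

def embed_preprocessing(model: ov.Model) -> ov.Model:
    from openvino.preprocess import PrePostProcessor

    ppp = PrePostProcessor(model)
    ppp.input().tensor().set_element_type(ov.Type.u8).set_layout(ov.Layout("NHWC"))
    ppp.input().model().set_layout(ov.Layout("NCHW"))
    ppp.input().preprocess() \
        .convert_element_type(ov.Type.f32) \
        .scale(255.0) \
        .mean([0.485, 0.456, 0.406]) \
        .scale([0.229, 0.224, 0.225])
    return ppp.build()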


def run_object_detection(
    compiled:    ov.CompiledModel,
    image_array: np.ndarray,
    conf_thresh: float = 0.5,
) -> list[dict]:
    """Run object detection (SSD/YOLO-style output parsing)."""
    h, w = image_array.shape[:2]
    img  = cv2.resize(image_array, (640, 640)).astype(np.float32) / 255.0
    img  = np.transpose(img, (2, 0, 1))[np.newaxis]

    result = compiled({compiled.input(0): img})
    # Generic output — actual parsing depends on model architecture
    output = result[compiled.output(0)]
    detections = []
    # ... parse boxes, classes, scores from output ...
    return detections


def extract_embeddings(
    compiled:   ov.CompiledModel,
    texts:      list[str],
    tokenizer,
    max_length: int = 128,
) -> np.ndarray:
    """Extract sentence embeddings from a compiled encoder model."""
    encoded = tokenizer(
        texts,
        padding="max_length",
        max_length=max_length,
        truncation=True,
        return_tensors="np",
    )
    inputs = {
        "input_ids":      encoded["input_ids"].astype(np.int64),
        "attention_mask": encoded["attention_mask"].astype(np.int64),
    }
    if "token_type_ids" in encoded:
        inputs["token_type_ids"] = encoded["token_type_ids"].astype(np.int64)

    result  = compiled(inputs)
    # Mean-pool last hidden state
    hidden  = result[compiled.output(0)]   # (N, seq_len, hidden)
    mask    = encoded["attention_mask"][..., np.newaxis]
    pooled  = (hidden * mask).sum(axis=1) / mask.sum(axis=1)
    norms   = np.linalg.norm(pooled, axis=1, keepdims=True)
    return pooled / np.maximum(norms, 1e-9)


# ── 5. INT8 quantization with NNCF ───────────────────────────────────────────

def quantize_model_int8(
    ov_model:   ov.Model,
    dataset:    Any,   # Iterable of input dicts
    output_path: str = "model_int8.xml",
    subset_size: int = 300,
) -> ov.Model:
    """
    Post-training INT8 quantization with NNCF.
    dataset: yields dicts mapping input names to numpy arrays.
    """
    import nncf

    calibration_dataset = nncf.Dataset(dataset)
    quantized = nncf.quantize(
        ov_model,
        calibration_dataset,
        preset=nncf.QuantizationPreset.PERFORMANCE,
        subset_size=subset_size,
    )
    ov.save_model(quantized, output_path)
    print(f"INT8 model saved: {output_path}")

    # Compare weight sizes — IR weights live in the .bin file next to the .xml
    orig_bin  = Path("model.bin")
    quant_bin = Path(output_path).with_suffix(".bin")
    if orig_bin.exists() and quant_bin.exists():
        ratio = orig_bin.stat().st_size / quant_bin.stat().st_size
        print(f"Size reduction: {ratio:.1f}x")
    return quantized
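

# ── 5b. Building a calibration dataset (hedged sketch) ───────────────────────
# nncf.Dataset wraps any iterable plus a transform mapping each item to the
# model input; the image paths here are hypothetical placeholders. Calibration
# preprocessing should match inference preprocessing (see classify_image).

def make_calibration_dataset(image_paths: list[str]):
    import cv2
    import nncf

    def transform(path: str) -> np.ndarray:
        img = cv2.imread(path)
        img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
        img = (img - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
        return np.transpose(img, (2, 0, 1))[np.newaxis]   # NCHW, batch of 1

    return nncf.Dataset(image_paths, transform)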


# ── 6. Async inference queue ──────────────────────────────────────────────────

def batch_inference_async(
    compiled:   ov.CompiledModel,
    inputs:     list[dict],
    n_streams:  int = 4,
) -> list[np.ndarray]:
    """
    High-throughput async inference with OpenVINO AsyncInferQueue.
    Saturates all CPU cores with concurrent inference requests.
    """
    results: list[np.ndarray | None] = [None] * len(inputs)

    def on_complete(request: ov.InferRequest, userdata: int):
        results[userdata] = request.get_output_tensor(0).data.copy()

    infer_queue = ov.AsyncInferQueue(compiled, n_streams)
    infer_queue.set_callback(on_complete)

    for i, inp in enumerate(inputs):
        infer_queue.start_async(inp, userdata=i)

    infer_queue.wait_all()
    return results
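

# ── 6b. Sizing the queue ──────────────────────────────────────────────────────
# A hedged sizing note: rather than hard-coding a stream count, the compiled
# model reports its preferred queue depth through a standard property.

def optimal_queue_depth(compiled: ov.CompiledModel) -> int:
    return compiled.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")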


# ── 7. LLM with OpenVINO GenAI ───────────────────────────────────────────────

def run_llm_genai(
    model_dir:  str   = "./Llama-3.2-1B-ov",
    prompt:     str   = "What is OpenVINO?",
    max_tokens: int   = 256,
    device:     str   = "CPU",
) -> str:
    """
    LLM inference with openvino_genai — simplest path for text generation.
    pip install openvino-genai
    """
    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline(model_dir, device)
    config = ov_genai.GenerationConfig()
    config.max_new_tokens = max_tokens
    config.do_sample      = False

    return pipe.generate(prompt, config)
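

# ── 7b. Streaming generation (hedged sketch) ──────────────────────────────────
# openvino_genai also accepts a streamer callback that receives decoded
# subwords as they are produced; returning False keeps generation running.

def run_llm_genai_streaming(
    model_dir: str = "./Llama-3.2-1B-ov",
    prompt:    str = "What is OpenVINO?",
    device:    str = "CPU",
) -> None:
    import openvino_genai as ov_genai

    pipe = ov_genai.LLMPipeline(model_dir, device)

    def streamer(subword: str) -> bool:
        print(subword, end="", flush=True)
        return False   # False → keep generating

    pipe.generate(prompt, max_new_tokens=256, streamer=streamer)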


def export_hf_model_for_openvino(
    hf_model_id: str = "meta-llama/Llama-3.2-1B",
    output_dir:  str = "./Llama-3.2-1B-ov",
    precision:   str = "INT4",   # INT4 | INT8 | FP16
):
    """
    Export a Hugging Face LLM to OpenVINO format via optimum-intel.
    pip install optimum[openvino]
    """
    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    print(f"Exporting {hf_model_id}{output_dir} ({precision})...")
    kwargs: dict = {}
    if precision == "INT8":
        kwargs["load_in_8bit"] = True
    elif precision == "INT4":
        # 4-bit weight compression goes through OVWeightQuantizationConfig
        from optimum.intel import OVWeightQuantizationConfig
        kwargs["quantization_config"] = OVWeightQuantizationConfig(bits=4)

    model = OVModelForCausalLM.from_pretrained(hf_model_id, export=True, **kwargs)
    model.save_pretrained(output_dir)
    tokenizer = AutoTokenizer.from_pretrained(hf_model_id)
    tokenizer.save_pretrained(output_dir)
    print(f"Model exported: {output_dir}")


# ── 8. Benchmark ──────────────────────────────────────────────────────────────

def benchmark_compiled_model(
    compiled:     ov.CompiledModel,
    sample_input: dict,
    n_warmup:     int = 5,
    n_runs:       int = 50,
) -> dict:
    """Benchmark latency and throughput of a compiled model."""
    # Warmup
    for _ in range(n_warmup):
        compiled(sample_input)

    # Benchmark
    latencies = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        compiled(sample_input)
        latencies.append((time.perf_counter() - t0) * 1000)   # ms

    stats = {
        "mean_ms":   np.mean(latencies),
        "p50_ms":    np.percentile(latencies, 50),
        "p95_ms":    np.percentile(latencies, 95),
        "p99_ms":    np.percentile(latencies, 99),
        "fps":       1000 / np.mean(latencies),
    }
    print(f"\n=== OpenVINO Latency Benchmark ({n_runs} runs) ===")
    for k, v in stats.items():
        print(f"  {k:<12}: {v:>8.2f}")
    return stats
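

# ── 8b. Cross-checking with benchmark_app ─────────────────────────────────────
# OpenVINO ships a benchmark_app CLI (installed with the pip package) that is
# a useful sanity check against the numbers above, e.g.:
#   benchmark_app -m resnet50.xml -d CPU -hint throughput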


if __name__ == "__main__":
    core = setup_core()

    # Convert torchvision ResNet50
    print("\nConverting ResNet50...")
    ov_model = convert_torchvision_classifier("resnet50", "./resnet50.xml")

    # Compile for throughput
    compiled = compile_for_throughput(core, ov_model, "CPU")

    # Benchmark with random input
    dummy_input = {compiled.input(0): np.random.randn(1, 3, 224, 224).astype(np.float32)}
    benchmark_compiled_model(compiled, dummy_input, n_runs=100)

    # AsyncInferQueue batch example
    batch_inputs = [dummy_input] * 20
    async_results = batch_inference_async(compiled, batch_inputs, n_streams=4)
    print(f"\nAsync inference: {len(async_results)} results")

Reach for the ONNX Runtime alternative when you need cross-platform deployment across Windows, Linux, macOS, iOS, and Android with the broadest accelerator plugin ecosystem: ONNX Runtime covers the widest platform range, while OpenVINO delivers the highest throughput and lowest latency specifically on Intel CPUs, Intel integrated GPUs, and Intel NPUs through device-specific kernel fusion and INT8 calibration that generic optimization cannot match on Intel silicon. Reach for the TensorRT alternative when deploying on NVIDIA GPUs and needing FP8, dynamic shapes, and CUDA graph optimization: TensorRT is purpose-built for NVIDIA, while OpenVINO's MULTI:CPU,GPU and AUTO device plugins distribute workloads transparently across heterogeneous Intel hardware without code changes, and openvino-genai provides a single-call LLM interface for running quantized Llama and Qwen models on CPU-only servers. The Claude Skills 360 bundle includes OpenVINO skill sets covering model conversion, device targeting, INT8 NNCF quantization, async inference queues, LLM GenAI pipelines, optimum-intel HF model export, and throughput benchmarking. Start with the free tier to try Intel-optimized inference code generation.

