Claude Code for LLaVA: Visual Language Models

Published: October 28, 2027
Read time: 5 min
By: Claude Skills 360

LLaVA (Large Language and Vision Assistant) enables visual question answering and image understanding with LLMs. Install with pip install transformers accelerate, then import LlavaNextProcessor and LlavaNextForConditionalGeneration from transformers. Load with processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf") and model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, device_map="auto"). Build the chat input as conversation = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}], then render it with prompt = processor.apply_chat_template(conversation, add_generation_prompt=True). Process with inputs = processor(images=pil_image, text=prompt, return_tensors="pt").to(model.device), generate with output = model.generate(**inputs, max_new_tokens=256), and decode with processor.decode(output[0], skip_special_tokens=True). Models: "llava-hf/llava-1.5-7b-hf" (7B, fastest), "llava-hf/llava-v1.6-mistral-7b-hf" (7B, best quality), "llava-hf/llava-v1.6-34b-hf" (34B, highest quality), "llava-hf/LLaVA-NeXT-Video-7B-hf" (video). For simple use cases there is a one-call pipeline: pipe = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf"), then result = pipe(image, prompt="USER: <image>\nDescribe this.\nASSISTANT:"). For lightweight inference, Moondream2 (vikhyatk/moondream2, loaded via AutoModelForCausalLM and AutoTokenizer) is a tiny ~2B model that runs fast on CPU and answers with model.answer_question(image, "What is in this image?", tokenizer). Claude Code generates LLaVA image QA pipelines, chart analyzers, document OCR workflows, and multi-turn visual chat applications.
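A minimal end-to-end sketch of that flow, assuming a CUDA GPU with ~14GB of VRAM and a local image photo.jpg (a hypothetical path):

# Quickstart — load LLaVA-Next, ask one question about one image.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id  = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model     = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

conversation = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=Image.open("photo.jpg"), text=prompt,
                   return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))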

CLAUDE.md for LLaVA

## LLaVA Stack
- Version: transformers >= 4.45
- Models: llava-hf/llava-1.5-7b-hf | llava-v1.6-mistral-7b-hf | LLaVA-NeXT-Video-7B-hf
- Processor: LlavaNextProcessor.from_pretrained(model_id)
- Model: LlavaNextForConditionalGeneration.from_pretrained(id, torch_dtype=torch.float16, device_map="auto")
- Prompt: processor.apply_chat_template([{role, content:[{type:image},{type:text}]}])
- Input: processor(images=PIL_image, text=prompt, return_tensors="pt")
- Generate: model.generate(**inputs, max_new_tokens=512)
- Lightweight: vikhyatk/moondream2 — 2B, fast CPU inference
- Pipeline: pipeline("image-to-text", model=...) for simple use cases
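For the simple-use-case route, a minimal pipeline sketch (assumes a local image photo.jpg, a hypothetical path):

# image-to-text pipeline — one call, no manual processor handling.
from transformers import pipeline

pipe   = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf", device_map="auto")
result = pipe("photo.jpg", prompt="USER: <image>\nDescribe this.\nASSISTANT:")
print(result[0]["generated_text"])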

LLaVA Visual QA Pipeline

# vision/llava_pipeline.py — visual instruction following with LLaVA
from __future__ import annotations
from pathlib import Path

import torch
from PIL import Image


# ── 1. Model loading ──────────────────────────────────────────────────────────

def load_llava(
    model_id:   str  = "llava-hf/llava-v1.6-mistral-7b-hf",
    device_map: str  = "auto",
    dtype:      str  = "float16",
    load_in_4bit: bool = False,
) -> tuple:
    """
    Load LLaVA-Next model and processor.

    Model choices:
    - llava-hf/llava-1.5-7b-hf     — 7B, LLaMA2 backbone, fastest
    - llava-hf/llava-v1.6-mistral-7b-hf — 7B, Mistral backbone, best quality
    - llava-hf/llava-v1.6-34b-hf   — 34B, highest quality (needs 2+ GPUs)
    - llava-hf/LLaVA-NeXT-Video-7B-hf — 7B with video support
    """
    from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration, BitsAndBytesConfig

    processor = LlavaNextProcessor.from_pretrained(model_id)

    load_kwargs = {
        "device_map":    device_map,
        "torch_dtype":   getattr(torch, dtype),
        "low_cpu_mem_usage": True,
    }

    if load_in_4bit:
        load_kwargs["quantization_config"] = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.float16,
        )

    model = LlavaNextForConditionalGeneration.from_pretrained(model_id, **load_kwargs)
    model.eval()
    print(f"LLaVA loaded: {model_id} ({dtype})")
    return model, processor


def load_moondream(device: str = "cpu") -> tuple:
    """
    Load Moondream2 — 2B VLM, fast on CPU.
    Ideal for edge inference, embedded systems.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id  = "vikhyatk/moondream2"
    tokenizer = AutoTokenizer.from_pretrained(model_id, revision="2024-07-23")
    model     = AutoModelForCausalLM.from_pretrained(
        model_id,
        revision="2024-07-23",
        trust_remote_code=True,
        torch_dtype=torch.float32 if device == "cpu" else torch.float16,
    ).to(device).eval()

    print(f"Moondream2 loaded on {device}")
    return model, tokenizer


# ── 2. Image question answering ───────────────────────────────────────────────

def ask_llava(
    model,
    processor,
    image:      str | Image.Image,
    question:   str,
    max_tokens: int   = 256,
    temperature: float = 0.2,
    do_sample:  bool  = False,
) -> str:
    """
    Ask a question about an image.
    Returns the model's text response.
    """
    if isinstance(image, str):
        image = Image.open(image).convert("RGB")

    # Build conversation with chat template
    conversation = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": question},
        ],
    }]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

    inputs = processor(
        images=image,
        text=prompt,
        return_tensors="pt",
    ).to(model.device)

    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_tokens,
            temperature=temperature if do_sample else 1.0,
            do_sample=do_sample,
            pad_token_id=processor.tokenizer.eos_token_id,
        )

    # Decode only the generated part
    generated_ids = output[0][inputs["input_ids"].shape[1]:]
    return processor.decode(generated_ids, skip_special_tokens=True).strip()
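
# Example usage — a sketch; the image path is hypothetical:
#   model, processor = load_llava()
#   print(ask_llava(model, processor, "photo.jpg", "How many people are in this image?"))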


def ask_moondream(
    model,
    tokenizer,
    image: str | Image.Image,
    question: str,
) -> str:
    """Ask Moondream a question about an image."""
    if isinstance(image, str):
        image = Image.open(image).convert("RGB")
    enc_image = model.encode_image(image)
    return model.answer_question(enc_image, question, tokenizer)


# ── 3. Multi-turn visual conversation ────────────────────────────────────────

class VisualChatSession:
    """
    Multi-turn conversation with a visual context.
    Maintains conversation history with image context.
    """

    def __init__(self, model, processor, image: str | Image.Image):
        self.model     = model
        self.processor = processor
        self.history   = []

        if isinstance(image, str):
            image = Image.open(image).convert("RGB")
        self.image = image

    def chat(self, message: str, max_tokens: int = 512) -> str:
        """Send a message and get a response."""
        # Add user message
        if not self.history:
            # First turn includes image
            content = [{"type": "image"}, {"type": "text", "text": message}]
        else:
            content = [{"type": "text", "text": message}]

        self.history.append({"role": "user", "content": content})

        prompt = self.processor.apply_chat_template(
            self.history, add_generation_prompt=True
        )
        inputs = self.processor(
            images=self.image,
            text=prompt,
            return_tensors="pt",
        ).to(self.model.device)

        with torch.no_grad():
            output = self.model.generate(
                **inputs,
                max_new_tokens=max_tokens,
                do_sample=False,
                pad_token_id=self.processor.tokenizer.eos_token_id,
            )

        generated_ids  = output[0][inputs["input_ids"].shape[1]:]
        response       = self.processor.decode(generated_ids, skip_special_tokens=True).strip()

        self.history.append({"role": "assistant", "content": [{"type": "text", "text": response}]})
        return response

    def reset(self):
        self.history = []
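
# Example usage — a sketch; the image path is hypothetical:
#   session = VisualChatSession(model, processor, "photo.jpg")
#   print(session.chat("What is the main subject of this image?"))
#   print(session.chat("What colors dominate it?"))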


# ── 4. Specialized tasks ──────────────────────────────────────────────────────

def describe_image(model, processor, image_path: str, detail: str = "detailed") -> str:
    """Generate a detailed image description."""
    prompts = {
        "brief":    "Describe this image in one or two sentences.",
        "detailed": "Provide a detailed description of everything you can see in this image.",
        "list":     "List all the objects, people, and elements you can identify in this image.",
    }
    return ask_llava(model, processor, image_path, prompts.get(detail, prompts["detailed"]))


def read_text_in_image(model, processor, image_path: str) -> str:
    """Extract text from an image (OCR-like capability)."""
    return ask_llava(
        model, processor, image_path,
        "Please read and transcribe all the text visible in this image. "
        "Preserve the layout as much as possible.",
        max_tokens=512,
    )


def analyze_chart(model, processor, image_path: str) -> str:
    """Extract data and insights from charts and graphs."""
    return ask_llava(
        model, processor, image_path,
        "This is a chart or graph. Please: "
        "1. Identify the chart type. "
        "2. Describe what data it shows. "
        "3. List the key values and trends. "
        "4. Provide a one-sentence summary of the main insight.",
        max_tokens=512,
    )


def check_image_content(
    model,
    processor,
    image_path: str,
    criteria:   list[str],
) -> dict[str, bool]:
    """
    Check whether specific elements are present in an image.
    Returns {criterion: True/False} dict.
    """
    criteria_str = "\n".join(f"- {c}" for c in criteria)
    question = (
        "For each of the following criteria, answer only 'yes' or 'no' on a new line:\n"
        f"{criteria_str}\n\nOne answer per line:"
    )
    response = ask_llava(model, processor, image_path, question, max_tokens=100)
    lines    = [l.strip().lower() for l in response.strip().split("\n") if l.strip()]

    results = {}
    for i, criterion in enumerate(criteria):
        if i < len(lines):
            results[criterion] = "yes" in lines[i]
        else:
            results[criterion] = False
    return results
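
# Example — hypothetical criteria; returns {criterion: bool}:
#   check_image_content(model, processor, "scene.jpg",
#                       ["a person is visible", "there is readable text"])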


def batch_analyze(
    model,
    processor,
    image_paths: list[str],
    question:    str,
    max_tokens:  int = 256,
) -> list[str]:
    """Analyze multiple images with the same question."""
    responses = []
    for i, path in enumerate(image_paths):
        print(f"[{i+1}/{len(image_paths)}] {Path(path).name}...")
        response = ask_llava(model, processor, path, question, max_tokens=max_tokens)
        responses.append(response)
    return responses


# ── 5. Streaming generation ───────────────────────────────────────────────────

def ask_llava_stream(
    model,
    processor,
    image:     str | Image.Image,
    question:  str,
    max_tokens: int = 512,
):
    """Stream LLaVA response token by token."""
    from transformers import TextIteratorStreamer
    from threading import Thread

    if isinstance(image, str):
        image = Image.open(image).convert("RGB")

    conversation = [{"role": "user", "content": [
        {"type": "image"}, {"type": "text", "text": question}
    ]}]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

    streamer = TextIteratorStreamer(
        processor.tokenizer,
        skip_prompt=True,
        skip_special_tokens=True,
    )

    def generate_fn():
        model.generate(**inputs, max_new_tokens=max_tokens, streamer=streamer)

    thread = Thread(target=generate_fn, daemon=True)
    thread.start()

    for token in streamer:
        yield token

    thread.join()
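
# Example usage — a sketch that prints tokens as they arrive:
#   for token in ask_llava_stream(model, processor, "photo.jpg", "Describe this."):
#       print(token, end="", flush=True)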


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("LLaVA Visual QA Demo")
    print("="*50)

    # Use Moondream for quick demo (2B, works on CPU)
    model, tokenizer = load_moondream(device="cpu")

    # Create a synthetic test image: gray background with black text
    from PIL import ImageDraw

    image = Image.new("RGB", (256, 256), (200, 200, 200))
    ImageDraw.Draw(image).text((40, 120), "Hello World", fill=(0, 0, 0))

    # Ask questions
    questions = [
        "What do you see in this image?",
        "Is there any text in this image?",
        "Describe the colors present.",
    ]

    for q in questions:
        answer = ask_moondream(model, tokenizer, image, q)
        print(f"\nQ: {q}\nA: {answer}")

    print("\n" + "="*50)
    print("For full LLaVA-Next (7B+), load with load_llava().")
    print("Requires ~14GB VRAM for float16 or ~8GB with 4-bit quantization.")
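
To stay under that VRAM budget on a single consumer GPU, a minimal sketch combining load_llava with 4-bit quantization (assumes bitsandbytes is installed; the image path is hypothetical):

# 4-bit loading — fits LLaVA-Next 7B in roughly 8GB of VRAM.
model, processor = load_llava(
    model_id="llava-hf/llava-v1.6-mistral-7b-hf",
    load_in_4bit=True,
)
print(ask_llava(model, processor, "photo.jpg", "Summarize this document page."))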

Consider the GPT-4 Vision / Claude API alternative when you need production visual reasoning, complex document understanding, or multilingual scene analysis without hosting your own GPU server. Cloud vision APIs handle the infrastructure, while LLaVA runs fully offline on your own hardware, enabling HIPAA-compliant medical image analysis, sensitive document processing, and industrial inspection applications where images cannot leave on-premises infrastructure.

Consider the InternVL2 alternative when you need higher benchmark performance on document understanding, multi-image reasoning, and chart analysis than LLaVA offers at the same parameter count. InternVL2 achieves better OCR and reasoning scores, while LLaVA's larger open-source community, wider fine-tuning support via LLaVA-NeXT, and established training recipes make it the more accessible starting point for building custom visual instruction-following models on domain-specific datasets.

The Claude Skills 360 bundle includes LLaVA skill sets covering model loading, image QA, multi-turn visual chat, OCR text extraction, chart analysis, streaming generation, batch processing, and Moondream lightweight inference. Start with the free tier to try visual language model code generation.
