
Claude Code for RAGAS: RAG Pipeline Evaluation

Published: October 10, 2027
Read time: 5 min read
By: Claude Skills 360

RAGAS evaluates RAG pipeline quality with a mix of reference-free and reference-based metrics. Install with pip install ragas, then import the essentials: from ragas import evaluate, EvaluationDataset and from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness.

Core metrics:
- faithfulness: measures whether the answer is grounded in the retrieved context (an LLM decomposes the answer into claims and checks each one against the context).
- answer_relevancy: measures whether the answer is relevant to the question (the judge LLM reverse-engineers questions from the answer and compares their embeddings to the original question).
- context_precision: measures whether retrieved context is ranked appropriately, with useful chunks ranked higher.
- context_recall: measures whether every sentence of the gold answer can be attributed to the retrieved context (requires ground truth).
- answer_correctness: combines factual overlap and semantic similarity against the ground-truth answer.

Datasets are built from samples: from ragas.dataset_schema import SingleTurnSample; sample = SingleTurnSample(user_input="What is RAG?", response="RAG combines retrieval...", retrieved_contexts=["RAG paper text..."], reference="Retrieval-Augmented Generation..."); dataset = EvaluationDataset(samples=[sample1, sample2, ...]).

To evaluate, wrap a judge LLM: from ragas.llms import LangchainLLMWrapper; from langchain_anthropic import ChatAnthropic; llm = LangchainLLMWrapper(ChatAnthropic(model="claude-sonnet-4-6")); result = evaluate(dataset=dataset, metrics=[faithfulness, answer_relevancy, context_precision], llm=llm). result.to_pandas() returns a DataFrame of per-sample scores.

For testset generation: from ragas.testset import TestsetGenerator; generator = TestsetGenerator.from_langchain(llm, critic_llm, embeddings); testset = generator.generate_with_langchain_docs(documents, test_size=10). Async evaluation is available via await aevaluate(dataset, metrics=...), and RunConfig(timeout=60, max_retries=3, max_wait=120) helps stay within API rate limits. Claude Code generates RAGAS evaluation pipelines, testset generators, metric configs, and result analysis scripts for RAG applications.
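Before wiring up the full pipeline below, a minimal run confirms that the judge LLM and embeddings are configured correctly. A sketch assuming ANTHROPIC_API_KEY and OPENAI_API_KEY are set; the sample texts are placeholders:

# quickstart_ragas.py: minimal reference-free evaluation (sketch)
from ragas import evaluate, EvaluationDataset
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import faithfulness, answer_relevancy
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_anthropic import ChatAnthropic
from langchain_openai import OpenAIEmbeddings

sample = SingleTurnSample(
    user_input="What is RAG?",
    response="RAG retrieves relevant documents and uses them as context for generation.",
    retrieved_contexts=[
        "RAG combines a retriever with a generator so answers are grounded in documents."
    ],
)
dataset = EvaluationDataset(samples=[sample])

result = evaluate(
    dataset=dataset,
    metrics=[faithfulness, answer_relevancy],   # reference-free: no gold answer needed
    llm=LangchainLLMWrapper(ChatAnthropic(model="claude-sonnet-4-6")),
    embeddings=LangchainEmbeddingsWrapper(OpenAIEmbeddings()),
)
print(result.to_pandas())                       # one row per sample, one column per metric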

CLAUDE.md for RAGAS

## RAGAS Stack
- Version: ragas >= 0.2
- Core: from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall
- Sample: SingleTurnSample(user_input, response, retrieved_contexts, reference?)
- Dataset: EvaluationDataset(samples=[...])
- Eval: evaluate(dataset, metrics=[...], llm=LangchainLLMWrapper(ChatAnthropic(...)))
- Testset: TestsetGenerator.from_langchain(llm, critic_llm, embeddings) → generate_with_langchain_docs
- Reference-free: faithfulness, answer_relevancy (no ground truth needed)
- Reference-required: context_recall, answer_correctness (need gold answers)

RAGAS Evaluation Pipeline

# evaluation/ragas_eval.py — comprehensive RAG evaluation with RAGAS
from __future__ import annotations

import pandas as pd


# ── 1. Prepare evaluation dataset ────────────────────────────────────────────

def build_eval_dataset_manual() -> "EvaluationDataset":
    """Build RAGAS evaluation dataset from manually curated QA pairs."""
    from ragas.dataset_schema import EvaluationDataset, SingleTurnSample

    samples = [
        SingleTurnSample(
            user_input="What is Retrieval-Augmented Generation?",
            response=(
                "Retrieval-Augmented Generation (RAG) is a technique that enhances "
                "LLM responses by first retrieving relevant documents from a knowledge "
                "base and then using them as context for generation."
            ),
            retrieved_contexts=[
                "RAG was introduced in the paper 'Retrieval-Augmented Generation for "
                "Knowledge-Intensive NLP Tasks'. It combines a retrieval component with "
                "a sequence-to-sequence model.",
                "The retrieval component uses dense vector search to find relevant "
                "documents from a large corpus.",
            ],
            reference=(
                "RAG is an AI framework that retrieves relevant information from a "
                "knowledge base before generating a response, improving factual accuracy."
            ),
        ),
        SingleTurnSample(
            user_input="What is the difference between sparse and dense retrieval?",
            response=(
                "Sparse retrieval uses keyword matching (like BM25) while dense retrieval "
                "uses neural embeddings to find semantically similar documents."
            ),
            retrieved_contexts=[
                "BM25 is a sparse retrieval method based on term frequency and inverse "
                "document frequency. It works by exact keyword matching.",
                "Dense retrieval uses bi-encoder models to embed queries and documents "
                "into the same vector space for semantic similarity search.",
            ],
            reference=(
                "Sparse retrieval matches exact terms; dense retrieval uses embedding "
                "similarity to find semantically related content."
            ),
        ),
        # Reference-free sample (no ground truth answer)
        SingleTurnSample(
            user_input="How does chunking affect RAG performance?",
            response="Smaller chunks improve precision but reduce context; larger chunks retain more context but may introduce noise.",
            retrieved_contexts=[
                "Document chunking strategies significantly impact RAG quality. "
                "Fixed-size chunks are simple but may split semantically related text. "
                "Semantic chunking preserves meaning boundaries."
            ],
        ),
    ]
    return EvaluationDataset(samples=samples)


# ── 2. Configure LLM evaluator ────────────────────────────────────────────────

def get_evaluator_llm(provider: str = "anthropic"):
    """Build LLM wrapper for RAGAS evaluation."""
    from ragas.llms import LangchainLLMWrapper
    from ragas.embeddings import LangchainEmbeddingsWrapper

    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic
        from langchain_openai import OpenAIEmbeddings   # RAGAS needs embeddings for some metrics
        llm        = LangchainLLMWrapper(ChatAnthropic(model="claude-sonnet-4-6", max_tokens=1024))
        embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())
    else:
        from langchain_openai import ChatOpenAI, OpenAIEmbeddings
        llm        = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
        embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

    return llm, embeddings


# ── 3. Run evaluation ──────────────────────────────────────────────────────────

def evaluate_rag_pipeline(
    dataset_or_path: "EvaluationDataset | str | None" = None,
    provider:        str = "anthropic",
    include_reference_metrics: bool = True,
) -> pd.DataFrame:
    """
    Run full RAGAS evaluation suite.
    Returns DataFrame with per-sample metric scores.
    """
    from ragas import evaluate, RunConfig
    from ragas.metrics import (
        faithfulness,
        answer_relevancy,
        context_precision,
        context_recall,
        answer_correctness,
        context_entity_recall,
    )

    # Build dataset
    if dataset_or_path is None:
        dataset = build_eval_dataset_manual()
    elif isinstance(dataset_or_path, str):
        dataset = load_eval_dataset_from_csv(dataset_or_path)
    else:
        dataset = dataset_or_path

    llm, embeddings = get_evaluator_llm(provider)

    # Core reference-free metrics (always run)
    metrics = [faithfulness, answer_relevancy, context_precision]

    # Add reference-requiring metrics if ground truth is available
    if include_reference_metrics:
        metrics += [context_recall, answer_correctness]

    run_config = RunConfig(
        timeout=120,
        max_retries=3,
        max_wait=180,
        max_workers=4,      # Parallel evaluation calls
    )

    print(f"Evaluating {len(dataset)} samples with {len(metrics)} metrics...")
    result = evaluate(
        dataset=dataset,
        metrics=metrics,
        llm=llm,
        embeddings=embeddings,
        run_config=run_config,
        show_progress=True,
    )

    scores_df = result.to_pandas()
    print("\n=== RAGAS Evaluation Results ===")
    mean_scores = scores_df[[m.name for m in metrics if m.name in scores_df.columns]].mean()
    for metric, score in mean_scores.items():
        print(f"  {metric:<25}: {score:.3f}")

    return scores_df


# ── 4. Load from CSV ──────────────────────────────────────────────────────────

def load_eval_dataset_from_csv(csv_path: str) -> "EvaluationDataset":
    """
    Load evaluation dataset from CSV.
    Expected columns: user_input, response, retrieved_contexts (pipe-separated), reference (optional)
    """
    from ragas.dataset_schema import EvaluationDataset, SingleTurnSample

    df      = pd.read_csv(csv_path)
    samples = []

    for _, row in df.iterrows():
        raw_contexts = row.get("retrieved_contexts")
        contexts = (
            [c.strip() for c in str(raw_contexts).split("|")]
            if pd.notna(raw_contexts) and str(raw_contexts).strip()
            else []
        )
        samples.append(SingleTurnSample(
            user_input=str(row["user_input"]),
            response=str(row["response"]),
            retrieved_contexts=contexts,
            reference=str(row["reference"]) if pd.notna(row.get("reference")) else None,
        ))
    return EvaluationDataset(samples=samples)


# ── 5. Testset generation ─────────────────────────────────────────────────────

def generate_testset_from_docs(
    doc_paths:    list[str],
    test_size:    int    = 20,
    output_path:  str    = "testset.csv",
    provider:     str    = "anthropic",
) -> pd.DataFrame:
    """
    Generate synthetic QA testset from local documents using RAGAS TestsetGenerator.
    Creates diverse question types: simple, reasoning, multi-context, conditioning.
    """
    from langchain_community.document_loaders import TextLoader, PyPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from ragas.testset import TestsetGenerator

    llm, embeddings = get_evaluator_llm(provider)

    # Load documents
    docs = []
    for path in doc_paths:
        if path.endswith(".pdf"):
            docs.extend(PyPDFLoader(path).load())
        else:
            docs.extend(TextLoader(path).load())

    splitter   = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    doc_chunks = splitter.split_documents(docs)
    print(f"Loaded {len(doc_chunks)} document chunks from {len(doc_paths)} files")

    generator = TestsetGenerator.from_langchain(
        generator_llm=llm.langchain_llm,
        critic_llm=llm.langchain_llm,
        embeddings=embeddings.langchain_embeddings,
    )

    testset    = generator.generate_with_langchain_docs(doc_chunks, test_size=test_size)
    testset_df = testset.to_pandas()

    testset_df.to_csv(output_path, index=False)
    print(f"Testset saved: {output_path} ({len(testset_df)} samples)")
    print(f"Question types:\n{testset_df['evolution_type'].value_counts()}")
    return testset_df


# ── 6. Continuous evaluation loop ────────────────────────────────────────────

async def async_evaluate(
    dataset: "EvaluationDataset",
    provider: str = "anthropic",
) -> pd.DataFrame:
    """Async evaluation for faster throughput with concurrent API calls."""
    from ragas import aevaluate
    from ragas.metrics import faithfulness, answer_relevancy, context_precision

    llm, embeddings = get_evaluator_llm(provider)
    result = await aevaluate(
        dataset=dataset,
        metrics=[faithfulness, answer_relevancy, context_precision],
        llm=llm,
        embeddings=embeddings,
    )
    return result.to_pandas()


if __name__ == "__main__":
    scores_df = evaluate_rag_pipeline()
    print("\nSample-level scores:")
    print(scores_df[["user_input", "faithfulness", "answer_relevancy", "context_precision"]].to_string())
    scores_df.to_csv("rag_eval_results.csv", index=False)
    print("\nResults saved: rag_eval_results.csv")

Consider the Arize Phoenix alternative when you need combined tracing and evaluation in a single platform with a visual UI for span exploration: Phoenix handles the full observability stack, while RAGAS focuses exclusively on evaluation metrics. That makes RAGAS the right choice when you already have distributed tracing (Langfuse, Datadog) and need best-in-class RAG-specific metrics. Consider the LangSmith evaluation alternative when you already use LangChain Hub and want native integration with the LangChain tracing platform: LangSmith traces and evaluates LangChain pipelines natively, while RAGAS provides the most widely cited academic RAG metrics (faithfulness, answer relevancy, context precision/recall), designed specifically to measure retrieval-augmented generation quality regardless of framework. The Claude Skills 360 bundle includes RAGAS skill sets covering dataset preparation, metric configuration, LLM judge setup, testset generation, async evaluation, and result analysis. Start with the free tier to try RAG evaluation code generation.

