
Claude Code for DeepEval: LLM Unit Testing Framework

Published: October 11, 2027
Read time: 5 min
By: Claude Skills 360

DeepEval is a pytest-based unit testing framework for LLM applications. Install it with pip install deepeval, then import the essentials: from deepeval import assert_test, evaluate; from deepeval.test_case import LLMTestCase; from deepeval.metrics import AnswerRelevancyMetric, FaithfulnessMetric, ContextualPrecisionMetric, ContextualRecallMetric, HallucinationMetric. A test case wraps one interaction: tc = LLMTestCase(input="What is RAG?", actual_output="RAG combines retrieval...", expected_output="RAG is...", retrieval_context=["RAG paper text..."]). Score a single metric with metric = AnswerRelevancyMetric(threshold=0.7, model="gpt-4o") and metric.measure(tc), then inspect metric.score and metric.reason. assert_test(tc, [AnswerRelevancyMetric(threshold=0.7)]) fails the test when any metric scores below its threshold, and evaluate([tc1, tc2], [AnswerRelevancyMetric(), FaithfulnessMetric()]) scores a whole batch. For pytest, combine @pytest.mark.parametrize("test_case", dataset) with def test_rag(test_case): assert_test(test_case, metrics), and register a @deepeval.on_test_run_end hook for post-run actions. Custom rubrics use GEval: from deepeval.metrics import GEval, then GEval(name="Correctness", evaluation_steps=["Check if response contains key facts", "Verify no contradictions"], evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT], threshold=0.7). Datasets are built with from deepeval.dataset import EvaluationDataset; dataset = EvaluationDataset(test_cases=[tc1, tc2]); dataset.evaluate([metric]), or pulled from Confident AI with dataset.pull(alias="my-dataset"). Multi-turn chats are tested with from deepeval.test_case import ConversationalTestCase, Turn and ConversationalTestCase(turns=[Turn(input="Hi", actual_output="Hello!")]). Run deepeval login to connect the Confident AI cloud dashboard and deepeval test run test_llm.py to execute a suite from the CLI. Claude Code generates DeepEval test suites, metric configs, custom GEval rubrics, dataset loaders, and pytest CI pipelines for LLM applications.
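
Putting the core pieces together, a minimal test looks like the sketch below. The question, answer, and context strings are placeholders, and the judge model is assumed to be an OpenAI model reachable via OPENAI_API_KEY:

from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric

# One test case = one real interaction with your LLM app
tc = LLMTestCase(
    input="What is RAG?",
    actual_output="RAG retrieves relevant documents and uses them as context for generation.",
    retrieval_context=["RAG combines a retriever with a generator to ground responses in documents."],
)

# Score one metric explicitly and inspect the judge's reasoning...
metric = AnswerRelevancyMetric(threshold=0.7, model="gpt-4o-mini")
metric.measure(tc)
print(metric.score, metric.reason)

# ...or assert: fails the test if any metric scores below its threshold
assert_test(tc, [AnswerRelevancyMetric(threshold=0.7, model="gpt-4o-mini")])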

CLAUDE.md for DeepEval

## DeepEval Stack
- Version: deepeval >= 0.21
- Test: LLMTestCase(input, actual_output, expected_output?, retrieval_context?)
- Assert: assert_test(test_case, [metric1, metric2]) — raises if below threshold
- Batch: evaluate([tc1, tc2], metrics) → EvaluationResult with scores
- RAG metrics: AnswerRelevancyMetric | FaithfulnessMetric | ContextualPrecisionMetric | ContextualRecallMetric
- Hallucination: HallucinationMetric(threshold=0.5) — score=0 means no hallucination
- Custom: GEval(name, evaluation_steps=[...], evaluation_params=[...], threshold)
- pytest: @pytest.mark.parametrize("test_case", dataset) + assert_test inside test fn
- CI: deepeval test run test_file.py — exits non-zero on failures
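
The CI line above relies on deepeval test run exiting non-zero when any metric fails. When a pipeline drives evaluation from a plain Python script instead of pytest, the same gate can be built on top of evaluate. A minimal sketch, assuming the EvaluationResult.test_results shape noted above and a hypothetical 90% pass-rate bar:

# ci_gate.py - fail the build when the LLM eval pass rate drops below 90% (sketch)
import sys

from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric, FaithfulnessMetric
from deepeval.test_case import LLMTestCase

# In a real pipeline these come from your golden dataset loader.
test_cases = [
    LLMTestCase(
        input="What is RAG?",
        actual_output="RAG retrieves relevant documents and uses them as context for generation.",
        retrieval_context=["RAG combines a retriever with a sequence-to-sequence generator."],
    ),
]

results = evaluate(
    test_cases=test_cases,
    metrics=[AnswerRelevancyMetric(threshold=0.7), FaithfulnessMetric(threshold=0.7)],
)

passed = sum(1 for r in results.test_results if r.success)
if passed / len(results.test_results) < 0.9:
    sys.exit(1)  # non-zero exit fails the CI job, just like a failing unit test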

DeepEval LLM Test Suite

# tests/test_llm_app.py — DeepEval unit tests for a RAG application
from __future__ import annotations
import os
import pytest

import deepeval
from deepeval import assert_test, evaluate
from deepeval.dataset import EvaluationDataset
from deepeval.metrics import (
    AnswerRelevancyMetric,
    FaithfulnessMetric,
    ContextualPrecisionMetric,
    ContextualRecallMetric,
    HallucinationMetric,
    SummarizationMetric,
)
from deepeval.metrics import GEval
from deepeval.metrics.base_metric import BaseMetric
from deepeval.test_case import LLMTestCase, LLMTestCaseParams, ConversationalTestCase, Turn


# ── 1. Test case builders ──────────────────────────────────────────────────────

def make_rag_test_case(
    question:    str,
    answer:      str,
    contexts:    list[str],
    expected:    str | None = None,
) -> LLMTestCase:
    """Build a RAG test case with retrieval context."""
    return LLMTestCase(
        input=question,
        actual_output=answer,
        expected_output=expected,
        retrieval_context=contexts,
        # HallucinationMetric evaluates against `context` (ground truth) rather than
        # `retrieval_context`, so mirror the same passages there for the hallucination test.
        context=contexts,
    )


# Curated golden dataset
RAG_TEST_CASES = [
    make_rag_test_case(
        question="What is Retrieval-Augmented Generation?",
        answer=(
            "Retrieval-Augmented Generation (RAG) enhances LLM responses by first "
            "retrieving relevant documents from a knowledge base and incorporating "
            "them as context for the generation step."
        ),
        contexts=[
            "RAG was introduced in 'Retrieval-Augmented Generation for Knowledge-Intensive "
            "NLP Tasks'. It combines a dense retrieval component with a sequence-to-sequence model.",
            "The retrieval step uses dense vector search to find semantically similar documents.",
        ],
        expected="RAG combines retrieval with generation to produce grounded, factual responses.",
    ),
    make_rag_test_case(
        question="What is the difference between BM25 and dense retrieval?",
        answer=(
            "BM25 is a sparse retrieval method using keyword frequency and inverse document "
            "frequency. Dense retrieval uses neural embeddings to find semantically similar "
            "documents even without keyword overlap."
        ),
        contexts=[
            "BM25 ranks documents based on term frequency-inverse document frequency (TF-IDF).",
            "Dense retrieval models like DPR embed queries and passages into shared vector spaces.",
        ],
        expected="BM25 uses keyword matching; dense retrieval uses embedding similarity.",
    ),
    make_rag_test_case(
        question="How does chunking strategy affect RAG quality?",
        answer=(
            "Chunk size directly impacts retrieval precision and context completeness. "
            "Smaller chunks improve precision but may miss surrounding context; "
            "larger chunks retain more context but introduce noise. "
            "Semantic chunking preserves meaning boundaries."
        ),
        contexts=[
            "Chunking strategies include fixed-size, sentence-level, and semantic chunking. "
            "Chunk overlap helps preserve context across boundaries.",
        ],
    ),
    # Hallucination test — answer contains info NOT in context
    make_rag_test_case(
        question="What year was BERT released?",
        answer="BERT was released by Google in 2018 and revolutionized NLP benchmarks.",
        contexts=["Transformer models have significantly advanced natural language processing."],
    ),
]


# ── 2. Standard RAG metrics tests ─────────────────────────────────────────────

@pytest.mark.parametrize("test_case", RAG_TEST_CASES[:3])   # First 3 have valid contexts
def test_answer_relevancy(test_case: LLMTestCase):
    """Answer must be relevant to the question."""
    metric = AnswerRelevancyMetric(
        threshold=0.7,
        model="gpt-4o-mini",
        include_reason=True,
    )
    assert_test(test_case, [metric])


@pytest.mark.parametrize("test_case", RAG_TEST_CASES[:3])
def test_faithfulness(test_case: LLMTestCase):
    """Answer must be grounded in retrieved contexts — no hallucination."""
    metric = FaithfulnessMetric(
        threshold=0.7,
        model="gpt-4o-mini",
        include_reason=True,
    )
    assert_test(test_case, [metric])


@pytest.mark.parametrize("test_case", RAG_TEST_CASES[:2])   # Need expected_output
def test_contextual_precision(test_case: LLMTestCase):
    """Retrieved contexts should be ranked by relevance."""
    metric = ContextualPrecisionMetric(
        threshold=0.7,
        model="gpt-4o-mini",
    )
    assert_test(test_case, [metric])


@pytest.mark.parametrize("test_case", RAG_TEST_CASES[:2])
def test_contextual_recall(test_case: LLMTestCase):
    """Retrieved contexts should cover the expected answer."""
    metric = ContextualRecallMetric(
        threshold=0.6,
        model="gpt-4o-mini",
    )
    assert_test(test_case, [metric])


def test_hallucination_detection():
    """The hallucination test case should score poorly on faithfulness."""
    hallucination_case = RAG_TEST_CASES[3]   # BERT answer not in context
    metric = HallucinationMetric(
        threshold=0.5,    # score ABOVE threshold = too much hallucination
        model="gpt-4o-mini",
    )
    metric.measure(hallucination_case)
    print(f"Hallucination score: {metric.score:.2f} ({metric.reason})")
    # Note: HallucinationMetric is inverted — higher score = more hallucination
    assert metric.score >= 0.0    # Just verify it runs; expected to flag hallucination


# ── 3. GEval custom metric ────────────────────────────────────────────────────

def test_response_conciseness():
    """Custom GEval metric: response should be concise and to-the-point."""
    conciseness_metric = GEval(
        name="Conciseness",
        evaluation_steps=[
            "Check if the response directly answers the question without unnecessary padding",
            "Verify the response is under 100 words",
            "Confirm there are no repeated points or verbose phrases",
        ],
        evaluation_params=[
            LLMTestCaseParams.INPUT,
            LLMTestCaseParams.ACTUAL_OUTPUT,
        ],
        threshold=0.6,
        model="gpt-4o-mini",
    )

    test_case = LLMTestCase(
        input="What is a vector database?",
        actual_output=(
            "A vector database stores high-dimensional embeddings and supports "
            "approximate nearest neighbor search for semantic similarity queries."
        ),
    )
    assert_test(test_case, [conciseness_metric])


def test_citation_quality():
    """Custom GEval: responses should reference retrieved context accurately."""
    citation_metric = GEval(
        name="CitationQuality",
        evaluation_steps=[
            "Check if claims in the response can be traced to the retrieval context",
            "Verify no facts are introduced that aren't present in the context",
            "Assess whether the response properly synthesizes context rather than copying verbatim",
        ],
        evaluation_params=[
            LLMTestCaseParams.INPUT,
            LLMTestCaseParams.ACTUAL_OUTPUT,
            LLMTestCaseParams.RETRIEVAL_CONTEXT,
        ],
        threshold=0.65,
        model="gpt-4o-mini",
    )

    assert_test(RAG_TEST_CASES[0], [citation_metric])


# ── 4. Custom BaseMetric ──────────────────────────────────────────────────────

class ResponseLengthMetric(BaseMetric):
    """Custom metric: response must be within a word count range."""

    def __init__(self, min_words: int = 10, max_words: int = 150, threshold: float = 1.0):
        self.min_words = min_words
        self.max_words = max_words
        self.threshold = threshold

    def measure(self, test_case: LLMTestCase) -> float:
        words = len(test_case.actual_output.split())
        in_range = self.min_words <= words <= self.max_words
        self.score  = 1.0 if in_range else 0.0
        self.reason = (
            f"Response has {words} words. "
            f"{'Within' if in_range else 'Outside'} range [{self.min_words}, {self.max_words}]."
        )
        self.success = self.score >= self.threshold
        return self.score

    async def a_measure(self, test_case: LLMTestCase) -> float:
        # assert_test evaluates metrics asynchronously by default, so delegate to measure().
        return self.measure(test_case)

    def is_successful(self) -> bool:
        return self.success

    @property
    def __name__(self):
        return f"ResponseLength({self.min_words}-{self.max_words} words)"


def test_response_length():
    """Response must be between 10 and 150 words."""
    metric    = ResponseLengthMetric(min_words=10, max_words=150)
    test_case = RAG_TEST_CASES[0]
    assert_test(test_case, [metric])


# ── 5. EvaluationDataset and batch evaluation ─────────────────────────────────

def test_full_rag_suite_batch():
    """Run all RAG metrics across the first 3 test cases in one batch call."""
    dataset = EvaluationDataset(test_cases=RAG_TEST_CASES[:3])

    results = evaluate(
        test_cases=dataset.test_cases,
        metrics=[
            AnswerRelevancyMetric(threshold=0.7,  model="gpt-4o-mini"),
            FaithfulnessMetric(threshold=0.7,     model="gpt-4o-mini"),
        ],
        run_async=True,     # Parallel metric evaluation
        print_results=True,
    )

    # Check aggregate pass rate
    passed = sum(1 for r in results.test_results if r.success)
    total  = len(results.test_results)
    print(f"\nBatch results: {passed}/{total} test cases passed")
    assert passed / total >= 0.75, f"Pass rate {passed/total:.0%} below 75% threshold"


# ── 6. Conversational testing ─────────────────────────────────────────────────

def test_multi_turn_conversation():
    """Test a multi-turn conversation for coherence and relevancy."""
    conv_case = ConversationalTestCase(
        turns=[
            Turn(
                input="What is a transformer model?",
                actual_output=(
                    "A transformer is a neural network architecture using self-attention "
                    "mechanisms to process sequences in parallel, introduced in 'Attention is All You Need'."
                ),
                retrieval_context=["Transformers use multi-head self-attention and position encodings."],
            ),
            Turn(
                input="How does the attention mechanism work?",
                actual_output=(
                    "Attention computes weighted sums over values, where weights come from "
                    "query-key dot products scaled by the square root of dimension size, "
                    "then passed through softmax."
                ),
                retrieval_context=["Attention(Q,K,V) = softmax(QK^T / sqrt(d_k)) * V."],
            ),
        ]
    )
    # Conversational test cases use ConversationRelevancyMetric
    from deepeval.metrics import ConversationRelevancyMetric
    metric = ConversationRelevancyMetric(threshold=0.7, model="gpt-4o-mini")
    assert_test(conv_case, [metric])


# ── 7. Summarization metric ───────────────────────────────────────────────────

def test_summarization_quality():
    """Test that a document summary covers key points faithfully."""
    source_doc = (
        "BERT (Bidirectional Encoder Representations from Transformers) was developed by Google "
        "and published in 2018. It introduced the concept of pre-training deep bidirectional "
        "transformers on large text corpora using masked language modeling and next sentence "
        "prediction. BERT achieved state-of-the-art results on 11 NLP benchmarks at the time "
        "of release, including GLUE, SQuAD, and NER tasks."
    )
    summary = (
        "BERT, released by Google in 2018, is a bidirectional transformer pre-trained on "
        "masked language modeling. It set new records on multiple NLP benchmarks."
    )

    test_case = LLMTestCase(
        input=source_doc,
        actual_output=summary,
    )
    metric = SummarizationMetric(threshold=0.7, model="gpt-4o-mini")
    assert_test(test_case, [metric])


# ── Post-run hook ──────────────────────────────────────────────────────────────

@deepeval.on_test_run_end
def post_run_callback():
    """Called after all tests complete — log or notify CI."""
    print("\nDeepEval test run complete. Results logged to Confident AI.")
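
The suite above hard-codes a small golden list. In practice, goldens usually come from a CSV export or from a dataset managed on Confident AI. A minimal loader sketch, assuming a hypothetical CSV with question, answer, contexts (pipe-separated), and expected columns:

# goldens_loader.py - build an EvaluationDataset from a CSV of goldens (sketch)
import csv

from deepeval.dataset import EvaluationDataset
from deepeval.test_case import LLMTestCase


def load_goldens_from_csv(path: str) -> EvaluationDataset:
    """Read one LLMTestCase per CSV row; the column names here are illustrative."""
    cases = []
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            cases.append(
                LLMTestCase(
                    input=row["question"],
                    actual_output=row["answer"],
                    expected_output=row.get("expected") or None,
                    retrieval_context=row["contexts"].split("|"),
                )
            )
    return EvaluationDataset(test_cases=cases)


# Alternatively, pull a dataset managed on Confident AI (requires `deepeval login` first):
# dataset = EvaluationDataset()
# dataset.pull(alias="my-dataset")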

If you need reference-free RAG metrics and test set generation from documents, RAGAS is the main alternative: it provides faithfulness and answer relevancy scores without ground-truth labels. DeepEval's pytest integration, GEval custom rubrics, and assert_test make it the stronger choice for CI/CD pipelines where LLM quality is a deployment gate. If you already use LangChain and want trace-linked evaluation, LangSmith ties evaluations directly to LangChain traces, while DeepEval's provider-agnostic LLMTestCase API works with any LLM backend and gives explicit pass/fail thresholds that block deployments just as unit tests block code merges. The Claude Skills 360 bundle includes DeepEval skill sets covering RAG test cases, pytest integration, GEval custom metrics, custom BaseMetric implementations, EvaluationDataset batch evaluation, conversational testing, and CI/CD pipeline configuration. Start with the free tier to try LLM unit test generation.
