Evidently monitors ML models and data quality in production. pip install evidently. Report(metrics=[DataDriftPreset(), DataQualityPreset(), TargetDriftPreset()]). report.run(reference_data=train_df, current_data=prod_df, column_mapping=ColumnMapping(target="churned", prediction="score", numerical_features=[...])). report.save_html("report.html"). report.as_dict() for programmatic parsing. Tests: suite = TestSuite(tests=[TestNumberOfColumnsWithMissingValues(lt=2), TestShareOfDriftedColumns(lt=0.3), TestColumnDrift(column_name="amount_usd"), TestValueRange(column_name="amount_usd", gte=0.0)]). suite.run(...). suite.as_dict()["summary"]["all_passed"]. Individual metrics: DatasetDriftMetric(), ColumnDriftMetric(column_name="age"), DatasetMissingValuesMetric(), ColumnSummaryMetric(column_name="amount_usd"), ClassificationQualityMetric(), RegressionQualityMetric(). ColumnMapping: ColumnMapping(target=None, numerical_features=[], categorical_features=[], datetime_features=[], prediction="prediction"). Snapshots for continuous monitoring: the Workspace stores timestamped snapshots and renders monitoring panels over them. workspace = Workspace.create("./monitoring_workspace"), project = workspace.create_project("churn_model"), workspace.add_report(project.id, report); workspace.search_project("churn_model") locates existing projects. MLflow integration: log_metric("drift_share", report.as_dict()["metrics"][0]["result"]["share_of_drifted_columns"]). Grafana: JSON metrics export from the report dict to push to a Prometheus pushgateway. Claude Code generates Evidently reports, test suites, monitoring snapshots, column mappings, and CI/CD drift gates.
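A minimal end-to-end sketch tying these calls together (assumes evidently>=0.4 and mlflow are installed; the file paths and column names are illustrative):
# quickstart_drift_check.py - minimal drift report plus MLflow logging (sketch)
import mlflow
import pandas as pd
from evidently import ColumnMapping
from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

train_df = pd.read_parquet("data/train.parquet")       # reference window
prod_df = pd.read_parquet("data/prod_latest.parquet")  # current window

mapping = ColumnMapping(
    target="churned",
    prediction="score",
    numerical_features=["age", "tenure_days", "monthly_spend"],
)
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=train_df, current_data=prod_df, column_mapping=mapping)
report.save_html("report.html")

# DataDriftPreset's first metric result carries share_of_drifted_columns
drift_share = report.as_dict()["metrics"][0]["result"]["share_of_drifted_columns"]
with mlflow.start_run(run_name="drift-check"):
    mlflow.log_metric("drift_share", drift_share)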
CLAUDE.md for Evidently
## Evidently Stack
- Version: evidently >= 0.4
- Report: Report(metrics=[DataDriftPreset(), DataQualityPreset()]) → run(reference, current, column_mapping)
- Tests: TestSuite(tests=[TestColumnDrift(column_name="age"), TestShareOfDriftedColumns(lt=0.3)])
- Column mapping: ColumnMapping(target, prediction, numerical_features, categorical_features)
- Save: report.save_html("report.html") or report.as_dict() for JSON
- Monitoring: project = workspace.create_project(name); workspace.add_report(project.id, report)
- CI gate: suite.as_dict()["summary"]["all_passed"] → sys.exit(0 if passed else 1)
Reports and Test Suites
# monitoring/evidently_monitor.py — complete Evidently monitoring setup
from __future__ import annotations
import json
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
import pandas as pd
from evidently import ColumnMapping
from evidently.metrics import (
ColumnDriftMetric,
ColumnSummaryMetric,
DatasetDriftMetric,
DatasetMissingValuesMetric,
ClassificationQualityMetric,
ClassificationClassBalance,
)
from evidently.metric_preset import (
DataDriftPreset,
DataQualityPreset,
TargetDriftPreset,
ClassificationPreset,
)
from evidently.report import Report
from evidently.test_preset import DataStabilityTestPreset
from evidently.tests import (
    TestColumnDrift,
    TestShareOfDriftedColumns,
    TestNumberOfMissingValues,
    TestNumberOfDuplicatedRows,
    TestValueRange,
    TestColumnValueMean,
)
from evidently.test_suite import TestSuite
FEATURE_COLS = ["age", "tenure_days", "monthly_spend", "support_tickets", "last_login_days"]
CATEGORICAL_COLS = ["status", "plan"]
TARGET_COL = "churned"
PREDICTION_COL = "churn_score"
def build_column_mapping(with_prediction: bool = True) -> ColumnMapping:
return ColumnMapping(
target=TARGET_COL if with_prediction else None,
prediction=PREDICTION_COL if with_prediction else None,
numerical_features=FEATURE_COLS,
categorical_features=CATEGORICAL_COLS,
datetime_features=[],
)
# ── Data Quality Report ───────────────────────────────────────────────────────
def run_data_quality_report(
reference_data: pd.DataFrame,
current_data: pd.DataFrame,
output_dir: str = "reports",
) -> dict:
"""Generate data quality and drift report."""
Path(output_dir).mkdir(exist_ok=True)
mapping = build_column_mapping(with_prediction=False)
report = Report(metrics=[
DataQualityPreset(),
DataDriftPreset(num_stattest="ks", cat_stattest="psi", stattest_threshold=0.05),
DatasetDriftMetric(),
DatasetMissingValuesMetric(),
# Per-column summaries for key features
*[ColumnSummaryMetric(column_name=col) for col in FEATURE_COLS[:3]],
*[ColumnDriftMetric(column_name=col) for col in FEATURE_COLS],
])
report.run(
reference_data=reference_data,
current_data=current_data,
column_mapping=mapping,
)
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
html_path = f"{output_dir}/data_quality_{ts}.html"
json_path = f"{output_dir}/data_quality_{ts}.json"
report.save_html(html_path)
result = report.as_dict()
with open(json_path, "w") as f:
json.dump(result, f, indent=2, default=str)
print(f"Report saved: {html_path}")
return result
# ── Model Performance Report ──────────────────────────────────────────────────
def run_model_report(
reference_data: pd.DataFrame,
current_data: pd.DataFrame,
output_dir: str = "reports",
) -> dict:
"""Generate model performance and target drift report."""
Path(output_dir).mkdir(exist_ok=True)
mapping = build_column_mapping(with_prediction=True)
report = Report(metrics=[
ClassificationPreset(),
TargetDriftPreset(),
ClassificationQualityMetric(),
ClassificationClassBalance(),
])
report.run(
reference_data=reference_data,
current_data=current_data,
column_mapping=mapping,
)
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
html_path = f"{output_dir}/model_report_{ts}.html"
report.save_html(html_path)
print(f"Model report saved: {html_path}")
return report.as_dict()
# ── Test Suite (CI/CD gate) ───────────────────────────────────────────────────
def run_test_suite(
reference_data: pd.DataFrame,
current_data: pd.DataFrame,
drift_threshold: float = 0.3,
) -> tuple[bool, dict]:
"""Run test suite — returns (passed, results_dict)."""
mapping = build_column_mapping(with_prediction=False)
suite = TestSuite(tests=[
# Stability
DataStabilityTestPreset(),
# Drift
TestShareOfDriftedColumns(lt=drift_threshold),
*[TestColumnDrift(column_name=col) for col in FEATURE_COLS],
# Completeness
TestNumberOfMissingValues(lt=100),
        TestNumberOfDuplicatedRows(eq=0),  # this test counts rows, not a share
# Range checks
TestValueRange(column_name="age", gte=18.0, lte=120.0),
TestValueRange(column_name="monthly_spend", gte=0.0, lte=50000.0),
TestValueRange(column_name="tenure_days", gte=0.0),
# Distribution checks
TestColumnValueMean(column_name="monthly_spend", gte=20.0, lte=2000.0),
])
suite.run(
reference_data=reference_data,
current_data=current_data,
column_mapping=mapping,
)
result = suite.as_dict()
passed = result["summary"]["all_passed"]
n_fail = result["summary"]["failed_tests"]
n_total = result["summary"]["total_tests"]
status = "PASSED" if passed else "FAILED"
print(f"Test suite {status}: {n_total - n_fail}/{n_total} tests passed")
if not passed:
for test in result["tests"]:
if test["status"] == "FAIL":
print(f" FAIL: {test['name']} — {test.get('description', '')}")
return passed, result
# ── Production monitoring with snapshots ─────────────────────────────────────
def record_monitoring_snapshot(
reference_data: pd.DataFrame,
current_data: pd.DataFrame,
workspace_path: str = "./monitoring_workspace",
project_name: str = "churn-model",
) -> None:
"""Store a timestamped snapshot to the ZenML/Evidently workspace."""
from evidently.ui.workspace import Workspace
from evidently.ui.base import Snapshot
workspace = Workspace.create(workspace_path)
report = Report(metrics=[
DataDriftPreset(),
ClassificationPreset(),
DataQualityPreset(),
])
report.run(
reference_data=reference_data,
current_data=current_data,
column_mapping=build_column_mapping(with_prediction=True),
)
snapshot = Snapshot.from_report(report, timestamp=datetime.now())
workspace.add_snapshot(project_name, snapshot)
print(f"Snapshot recorded for project '{project_name}'")
# ── Prometheus metrics export ─────────────────────────────────────────────────
def export_prometheus_metrics(report_dict: dict, model_name: str = "churn") -> str:
"""Format Evidently results as Prometheus text exposition."""
lines = []
metrics_section = report_dict.get("metrics", [])
for metric in metrics_section:
result = metric.get("result", {})
mtype = metric.get("metric", "unknown").lower().replace(".", "_")
if "drift_detected" in result:
val = 1 if result["drift_detected"] else 0
lines.append(f'evidently_drift_detected{{model="{model_name}",metric="{mtype}"}} {val}')
if "share_of_drifted_columns" in result:
val = result["share_of_drifted_columns"]
lines.append(f'evidently_drift_share{{model="{model_name}"}} {val}')
if "current" in result and "missing_values_share" in (result.get("current") or {}):
val = result["current"]["missing_values_share"]
lines.append(f'evidently_missing_values_share{{model="{model_name}"}} {val}')
return "\n".join(lines)
# ── CLI entry point ───────────────────────────────────────────────────────────
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--reference", required=True)
parser.add_argument("--current", required=True)
parser.add_argument("--mode", choices=["report", "test", "snapshot"], default="test")
args = parser.parse_args()
ref = pd.read_csv(args.reference)
cur = pd.read_csv(args.current)
if args.mode == "report":
run_data_quality_report(ref, cur)
elif args.mode == "test":
passed, _ = run_test_suite(ref, cur)
sys.exit(0 if passed else 1)
elif args.mode == "snapshot":
record_monitoring_snapshot(ref, cur)
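Before wiring the gate into CI, a quick local smoke check with synthetic data exercises the suite end to end. This is a sketch under stated assumptions: it runs from the monitoring/ directory so evidently_monitor imports directly, and the generated values are arbitrary, matching only the expected column names; two frames drawn from the same distribution should generally pass the gate.
# monitoring/smoke_check.py - run the test suite against synthetic frames (illustrative)
import numpy as np
import pandas as pd
from evidently_monitor import run_test_suite

def make_frame(n: int = 500, seed: int = 0) -> pd.DataFrame:
    """Build a frame matching FEATURE_COLS + CATEGORICAL_COLS above."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "age": rng.integers(18, 90, n),
        "tenure_days": rng.integers(0, 2000, n),
        "monthly_spend": rng.uniform(20.0, 500.0, n),
        "support_tickets": rng.integers(0, 10, n),
        "last_login_days": rng.integers(0, 60, n),
        "status": rng.choice(["active", "paused"], n),
        "plan": rng.choice(["basic", "pro"], n),
    })

if __name__ == "__main__":
    passed, _ = run_test_suite(make_frame(seed=0), make_frame(seed=1))
    print("smoke check:", "PASSED" if passed else "FAILED")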
GitHub Actions Integration
# .github/workflows/model-monitoring.yml — daily drift detection
name: Model Drift Monitor
on:
schedule:
- cron: "0 8 * * *" # daily at 8am UTC
workflow_dispatch:
jobs:
drift-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with: { python-version: "3.11" }
      - run: pip install evidently pandas  # AWS CLI is preinstalled on ubuntu-latest
      - name: Download reference + current data from S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1  # adjust to your bucket's region
run: |
aws s3 cp s3://my-bucket/reference/train.csv reference.csv
aws s3 cp s3://my-bucket/production/scoring_$(date +%Y%m%d).csv current.csv
- name: Run Evidently test suite
run: |
python monitoring/evidently_monitor.py \
--reference reference.csv \
--current current.csv \
--mode test
- name: Alert on drift
if: failure()
uses: actions/github-script@v7
with:
script: |
github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: `Data drift detected — ${new Date().toISOString().slice(0,10)}`,
body: "Evidently test suite failed. Check the monitoring dashboard.",
labels: ["ml-monitoring", "data-drift"],
})
Consider the NannyML alternative when you need confidence-based performance estimation on unlabeled production data: its CBPE and DLE methods estimate model performance even while ground-truth labels are delayed, so NannyML's estimation approach is superior when labels arrive days or weeks later. Evidently requires actual labels for performance monitoring, but provides richer data quality and drift visualization with its Report and TestSuite API. Consider the Arize Phoenix alternative when you need a unified observability platform covering LLM traces, embeddings, and traditional ML model drift in a single UI, with vector similarity search over traces: Phoenix excels for LLM-heavy workloads, while Evidently is battle-tested for classical ML monitoring with tabular data drift detection, rich HTML reports, and the Workspace snapshot API for team-visible production dashboards. The Claude Skills 360 bundle includes Evidently skill sets covering data quality reports, drift test suites, monitoring snapshots, Prometheus export, and GitHub Actions drift gates. Start with the free tier to try ML monitoring generation.