Axiom provides serverless log analytics and observability. new Axiom({ token: apiKey }) initializes the client; axiom.ingest("dataset", [{ ...fields }]) queues structured log events, and axiom.flush() batches and drains the queue. Query with APL (Axiom Processing Language): axiom.query("['my-dataset'] | where status >= 400 | summarize count() by bin_auto(timestamp), route") returns time-series results. APL is Kusto-inspired: | where, | project, | summarize, | extend, | top N by, | order by, | join, | union. axiom.queryLegacy(dataset, query) runs the older structured (non-APL) query format against a single dataset. OpenTelemetry: new OTLPTraceExporter({ url: "https://api.axiom.co/v1/traces", headers: { Authorization: "Bearer TOKEN", "X-Axiom-Dataset": "my-traces" } }) exports spans. @axiomhq/winston and @axiomhq/pino are logging transport plugins. Vercel integration: add Axiom as a Log Drain in the Vercel dashboard, and all function logs flow to Axiom automatically. axiom.datasets.create({ name }) and axiom.datasets.list() manage datasets. Claude Code generates Axiom structured logging, APL analytics queries, and OpenTelemetry tracing pipelines.
# CLAUDE.md for Axiom
## Axiom Stack
- Version: @axiomhq/js >= 1.x
- Init: const axiom = new Axiom({ token: process.env.AXIOM_TOKEN! })
- Ingest: axiom.ingest("my-dataset", [{ timestamp: new Date().toISOString(), level: "info", message, ...context }])
- Flush: await axiom.flush() — required at Worker/Edge function end (no long-lived process)
- APL query: const result = await axiom.query("['my-dataset'] | where level == 'error' | order by timestamp desc | limit 100")
- Result rows: result.matches — [{ _time, data: Record<string, unknown> }]
- Winston transport: new WinstonTransport({ token, dataset }) from @axiomhq/winston
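The winston transport mentioned above can be wired up like this (a minimal sketch, assuming the WinstonTransport export from @axiomhq/winston 1.x; the dataset and field names are illustrative):

```typescript
// logger.ts — route winston output to an Axiom dataset
import winston from "winston"
import { WinstonTransport as AxiomTransport } from "@axiomhq/winston"

export const wlogger = winston.createLogger({
  level: "info",
  format: winston.format.json(), // Axiom ingests JSON events directly
  transports: [
    new AxiomTransport({
      token: process.env.AXIOM_TOKEN!,
      dataset: process.env.AXIOM_DATASET ?? "app-logs",
    }),
  ],
})

wlogger.info("user_signup", { userId: "u_123", plan: "pro" })
```

The transport batches events internally, so no explicit flush call is needed in long-lived Node processes.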
## Axiom Client
// lib/axiom/client.ts — structured logging and analytics with Axiom
import { Axiom } from "@axiomhq/js"
export const axiom = new Axiom({
token: process.env.AXIOM_TOKEN!,
...(process.env.AXIOM_ORG_ID ? { orgId: process.env.AXIOM_ORG_ID } : {}),
})
const DATASET = process.env.AXIOM_DATASET ?? "app-logs"
export type LogLevel = "debug" | "info" | "warn" | "error"
export type LogContext = {
traceId?: string
spanId?: string
userId?: string
sessionId?: string
route?: string
method?: string
statusCode?: number
durationMs?: number
[key: string]: unknown
}
/** Structured logger — always includes timestamp and level */
export const logger = {
debug: (message: string, ctx: LogContext = {}) =>
log("debug", message, ctx),
info: (message: string, ctx: LogContext = {}) =>
log("info", message, ctx),
warn: (message: string, ctx: LogContext = {}) =>
log("warn", message, ctx),
error: (message: string, err?: Error | unknown, ctx: LogContext = {}) => {
const errCtx = err instanceof Error
? { errorName: err.name, errorMessage: err.message, errorStack: err.stack }
: { error: String(err) }
return log("error", message, { ...errCtx, ...ctx })
},
flush: () => axiom.flush(),
}
function log(level: LogLevel, message: string, ctx: LogContext) {
const event = {
timestamp: new Date().toISOString(),
level,
message,
service: process.env.SERVICE_NAME ?? "app",
env: process.env.NODE_ENV ?? "development",
...ctx,
}
axiom.ingest(DATASET, [event])
// Mirror to console in dev
if (process.env.NODE_ENV !== "production") {
const fn = level === "error" ? console.error : level === "warn" ? console.warn : console.log
fn(`[${level.toUpperCase()}] ${message}`, ctx)
}
}
// ── APL query helpers ──────────────────────────────────────────────────────
export type QueryResult = {
rows: Record<string, unknown>[]
elapsed: number
}
export async function apl(query: string): Promise<QueryResult> {
const start = Date.now()
const result = await axiom.query(query)
return {
rows: (result.matches ?? []).map((m) => ({ _time: m._time, ...m.data })),
elapsed: Date.now() - start,
}
}
export async function getErrorRate(
  route: string,
  hours = 24,
): Promise<Array<{ timestamp: string; errors: number; total: number; errorRate: number }>> {
  // Note: route is interpolated directly into the APL string — pass trusted values only
  const { rows } = await apl(`
    ['${DATASET}']
    | where timestamp > ago(${hours}h)
    | where route == "${route}"
    | summarize
        errors = countif(level == "error"),
        total = count()
      by bin_auto(timestamp)
    | extend errorRate = todouble(errors) / todouble(total) * 100
    | order by timestamp asc
  `)
  return rows as Array<{ timestamp: string; errors: number; total: number; errorRate: number }>
}
export async function getSlowRoutes(
p99Ms = 1000,
hours = 24,
): Promise<Array<{ route: string; p99: number; p95: number; count: number }>> {
const { rows } = await apl(`
['${DATASET}']
| where timestamp > ago(${hours}h)
| where isnotnull(durationMs)
| summarize
p99 = percentile(durationMs, 99),
p95 = percentile(durationMs, 95),
count = count()
by route
| where p99 > ${p99Ms}
| order by p99 desc
| limit 20
`)
return rows as Array<{ route: string; p99: number; p95: number; count: number }>
}
export async function recentErrors(limit = 50): Promise<Array<Record<string, unknown>>> {
const { rows } = await apl(`
['${DATASET}']
| where timestamp > ago(1h)
| where level == "error"
| order by timestamp desc
| limit ${limit}
`)
return rows
}
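The query helpers above interpolate caller-supplied values straight into APL strings. A small escaping helper (illustrative, not part of @axiomhq/js) keeps quoted string literals safe against embedded quotes and backslashes:

```typescript
// aplString: wrap a value as a double-quoted APL string literal,
// escaping backslashes first, then embedded double quotes.
export function aplString(value: string): string {
  return `"${value.replace(/\\/g, "\\\\").replace(/"/g, '\\"')}"`
}

// Usage: build the filter line instead of interpolating raw input:
// `| where route == ${aplString(userSuppliedRoute)}`
```

Numeric parameters (hours, limits) should likewise be validated with Number.isFinite before interpolation.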
## Next.js Request Logging Middleware
// middleware.ts — instrument every request with Axiom
import { NextResponse } from "next/server"
import type { NextRequest } from "next/server"
import { logger } from "@/lib/axiom/client"
export async function middleware(req: NextRequest) {
  // Web Crypto global — available in the Edge runtime and Node 18+
  // (importing randomUUID from "crypto" is not supported on Edge)
  const traceId = crypto.randomUUID()
  const start = Date.now()
const res = NextResponse.next({
request: {
headers: new Headers({ ...Object.fromEntries(req.headers), "x-trace-id": traceId }),
},
})
res.headers.set("x-trace-id", traceId)
// Note: middleware runs before route handlers, so res.status reflects
// NextResponse.next() (typically 200), not the final handler response,
// and durationMs covers middleware time only
const status = res.status
const durationMs = Date.now() - start
logger.info("http_request", {
traceId,
method: req.method,
route: req.nextUrl.pathname,
statusCode: status,
durationMs,
userAgent: req.headers.get("user-agent") ?? "",
country: req.geo?.country ?? "", // populated on Vercel's Edge runtime; removed in Next 15
})
await logger.flush() // Required for Edge — no persistent process
return res
}
export const config = {
matcher: ["/api/:path*", "/((?!_next|favicon).*)"],
}
## OpenTelemetry Tracing
// lib/axiom/tracing.ts — OTel tracing exported to Axiom
import { NodeSDK } from "@opentelemetry/sdk-node"
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http"
import { Resource } from "@opentelemetry/resources"
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_VERSION } from "@opentelemetry/semantic-conventions"
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base"
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http"
const exporter = new OTLPTraceExporter({
url: "https://api.axiom.co/v1/traces",
headers: {
Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
"X-Axiom-Dataset": process.env.AXIOM_TRACES_DATASET ?? "app-traces",
},
})
export const sdk = new NodeSDK({
resource: new Resource({
[SEMRESATTRS_SERVICE_NAME]: process.env.SERVICE_NAME ?? "app",
[SEMRESATTRS_SERVICE_VERSION]: process.env.npm_package_version ?? "0.0.0",
}),
spanProcessor: new SimpleSpanProcessor(exporter), // exports each span immediately; prefer BatchSpanProcessor in production
instrumentations: [new HttpInstrumentation()],
})
// Call sdk.start() in instrumentation.ts (Next.js) or app startup
// import { trace } from "@opentelemetry/api"
// const tracer = trace.getTracer("my-service")
// const span = tracer.startSpan("database.query")
// span.setAttribute("db.statement", sql)
// span.end()
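In Next.js the SDK is typically started from instrumentation.ts, as the comment above notes. A sketch of that wiring, assuming the sdk export from lib/axiom/tracing.ts and Next's instrumentation hook (path is illustrative):

```typescript
// instrumentation.ts — Next.js calls register() once per server startup
export async function register() {
  // Only start the Node SDK in the Node.js runtime; the Edge runtime
  // cannot run NodeSDK
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { sdk } = await import("./lib/axiom/tracing")
    sdk.start()
  }
}
```

Remember to enable the hook if your Next.js version still gates it behind experimental.instrumentationHook in next.config.js.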
For application error monitoring (stack trace capture, release tracking, session replay, and auto-instrumented performance monitoring in a debug-focused UI), see the Sentry guide: Sentry is the gold standard for error tracking, while Axiom is the tool for flexible structured log analytics with APL queries over your own event schema. For enterprise APM (distributed tracing, infrastructure metrics, anomaly detection, SLO tracking, and a managed SIEM), see the Datadog guide: Datadog is a full-stack observability platform, while Axiom is a cost-effective serverless log analytics layer that excels at storing and querying structured events from edge functions and serverless workloads. The Claude Skills 360 bundle includes Axiom skill sets covering structured logging, APL analytics, and OTel tracing. Start with the free tier to try log analytics generation.