MotherDuck brings DuckDB to the cloud with serverless persistence: new Database("md:?motherduck_token=TOKEN") in Node.js, or ATTACH 'md:?motherduck_token=TOKEN' from a local DuckDB instance, connects to MotherDuck. DuckDB itself runs in-process; new Database(":memory:") gives an ephemeral database. db.all("SELECT * FROM read_parquet('s3://bucket/data/**/*.parquet') LIMIT 100") queries Parquet directly, and read_csv('file.csv') and read_json('file.json') scan other structured files. ATTACH 'md:mydb' links a MotherDuck cloud database; USE mydb switches context so subsequent SQL runs against the cloud database. CREATE TABLE analytics AS SELECT ... FROM read_parquet(...) materializes a result. Window functions use OVER (PARTITION BY ... ORDER BY ... ROWS BETWEEN ...). COPY (SELECT ...) TO 'out.parquet' (FORMAT PARQUET) exports results. Node.js: import duckdb from "duckdb", then const db = new duckdb.Database("md:mydb?motherduck_token=TOKEN"). DuckDB WASM in the browser: import * as duckdb from "@duckdb/duckdb-wasm", instantiate with a Web Worker, then await conn.query("SELECT ...") for in-browser analytics. CREATE SHARE ... FROM mydb publishes a database as a shareable read-only snapshot. Claude Code generates MotherDuck analytics queries, DuckDB Node.js integrations, and Parquet data pipelines.
CLAUDE.md for MotherDuck / DuckDB
## MotherDuck / DuckDB Stack
- Node.js: duckdb >= 1.x (native binding) or @duckdb/node-api (newer typed API)
- Connect MotherDuck: new duckdb.Database("md:mydb?motherduck_token=" + TOKEN)
- In-memory: new duckdb.Database(":memory:") — ephemeral, no persistence
- Query files: SELECT * FROM read_parquet('path/**/*.parquet') or read_csv('file.csv')
- Cloud attach: ATTACH 'md:mydb' (or ATTACH 'md:?motherduck_token=TOKEN' to pass the token inline); then USE mydb
- WASM browser: @duckdb/duckdb-wasm — worker-based, no Node.js required (see the browser sketch after this list)
- httpfs: INSTALL httpfs; LOAD httpfs; then SELECT * FROM read_parquet('https://url/file.parquet')
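For the browser path, here is a minimal setup sketch following the jsDelivr bundle flow documented in the @duckdb/duckdb-wasm README; the file location and the usage query are illustrative:
// lib/duckdb-wasm/init.ts — browser-side DuckDB (sketch; bundle flow per the duckdb-wasm README)
import * as duckdb from "@duckdb/duckdb-wasm"

export async function initDuckDb(): Promise<duckdb.AsyncDuckDB> {
  // Pick the best WASM bundle for this browser, served from jsDelivr
  const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles())
  // Spawn the worker from a same-origin blob URL
  const workerUrl = URL.createObjectURL(
    new Blob([`importScripts("${bundle.mainWorker!}");`], { type: "text/javascript" }),
  )
  const worker = new Worker(workerUrl)
  const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker)
  await db.instantiate(bundle.mainModule, bundle.pthreadWorker)
  URL.revokeObjectURL(workerUrl)
  return db
}

// Usage: const db = await initDuckDb(); const conn = await db.connect()
// const result = await conn.query("SELECT 42 AS answer")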
DuckDB / MotherDuck Client
// lib/motherduck/client.ts — DuckDB/MotherDuck analytics client
import duckdb from "duckdb"
import { promisify } from "util"
type Row = Record<string, unknown>
function createDb(): duckdb.Database {
  const token = process.env.MOTHERDUCK_TOKEN
  if (token) {
    // Connect to MotherDuck cloud database
    return new duckdb.Database(`md:analytics?motherduck_token=${token}`)
  }
  // Fallback: local in-memory DuckDB
  return new duckdb.Database(":memory:")
}
// Singleton DB connection
const globalDuck = globalThis as unknown as { duckDb?: duckdb.Database }
function getDb(): duckdb.Database {
  if (!globalDuck.duckDb) globalDuck.duckDb = createDb()
  return globalDuck.duckDb
}
/** Execute a SELECT query and return typed rows */
export async function queryRows<T extends Row = Row>(
  sql: string,
  params: unknown[] = [],
): Promise<T[]> {
  const db = getDb()
  const conn = db.connect()
  // promisify appends the node-style callback that conn.all expects last
  const all = promisify(conn.all.bind(conn)) as (sql: string, ...params: unknown[]) => Promise<T[]>
  try {
    return await all(sql, ...params)
  } finally {
    conn.close()
  }
}
/** Execute one or more non-SELECT statements (CREATE, INSERT, COPY, etc.) */
export async function execute(sql: string): Promise<void> {
  const db = getDb()
  const conn = db.connect()
  // Use exec, not run: exec accepts multi-statement SQL like "INSTALL ...; LOAD ...;"
  const exec = promisify(conn.exec.bind(conn)) as (sql: string) => Promise<void>
  try {
    await exec(sql)
  } finally {
    conn.close()
  }
}
// ── Analytics helpers ──────────────────────────────────────────────────────
/** Query a remote Parquet file or glob pattern directly */
export async function queryParquet<T extends Row>(
  urlOrPath: string,
  sql: string,
): Promise<T[]> {
  // Install and load httpfs when reading over HTTP(S) or S3
  if (urlOrPath.startsWith("http") || urlOrPath.startsWith("s3://")) {
    await execute("INSTALL httpfs; LOAD httpfs;")
  }
  // Callers write "FROM __PARQUET__" as a placeholder for the scan target
  const fullSql = sql.replace("FROM __PARQUET__", `FROM read_parquet('${urlOrPath}')`)
  return queryRows<T>(fullSql)
}
export type TimeseriesRow = { period: string; value: number; dimension?: string }
/** Aggregate time series with configurable bucketing */
export async function timeseries(options: {
  table: string
  timeCol: string
  valueCol: string
  agg?: "sum" | "avg" | "count" | "min" | "max"
  bucket?: "hour" | "day" | "week" | "month"
  where?: string
  limit?: number
}): Promise<TimeseriesRow[]> {
  const {
    table, timeCol, valueCol,
    agg = "sum", bucket = "day",
    where, limit = 90,
  } = options
  const trunc = bucket === "hour" ? `date_trunc('hour', ${timeCol})`
    : bucket === "week" ? `date_trunc('week', ${timeCol})`
    : bucket === "month" ? `date_trunc('month', ${timeCol})`
    : `${timeCol}::DATE`
  const sql = `
    SELECT
      ${trunc}::VARCHAR AS period,
      ${agg}(${valueCol}) AS value
    FROM ${table}
    ${where ? `WHERE ${where}` : ""}
    GROUP BY 1
    ORDER BY 1 DESC
    LIMIT ${limit}
  `
  return queryRows<TimeseriesRow>(sql)
}
/** Run a windowed moving average */
export async function movingAverage(options: {
  table: string
  timeCol: string
  valueCol: string
  windowDays: number
  where?: string
}): Promise<Array<{ period: string; value: number; moving_avg: number }>> {
  const { table, timeCol, valueCol, windowDays, where } = options
  // ROWS BETWEEN counts rows, not calendar days, so the window is exact only
  // when the daily CTE yields one row per day (no gaps in the series)
  const sql = `
    WITH daily AS (
      SELECT
        ${timeCol}::DATE AS period,
        sum(${valueCol}) AS value
      FROM ${table}
      ${where ? `WHERE ${where}` : ""}
      GROUP BY 1
    )
    SELECT
      period,
      value,
      avg(value) OVER (
        ORDER BY period
        ROWS BETWEEN ${windowDays - 1} PRECEDING AND CURRENT ROW
      ) AS moving_avg
    FROM daily
    ORDER BY period
  `
  return queryRows(sql)
}
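A quick usage sketch for these helpers; the table and column names (events, created_at, revenue) and the S3 path are illustrative:
// scripts/example-usage.ts — illustrative calls against the client above
import { queryRows, queryParquet, timeseries, movingAverage } from "../lib/motherduck/client"

// Parameterized ad-hoc query (duckdb binds ? placeholders positionally)
const top = await queryRows<{ user_id: string; n: number }>(
  "SELECT user_id, count(*) AS n FROM events GROUP BY 1 ORDER BY 2 DESC LIMIT ?",
  [10],
)

// __PARQUET__ is the placeholder queryParquet substitutes with read_parquet(...)
const sample = await queryParquet<{ id: string }>(
  "s3://my-bucket/events/*.parquet",
  "SELECT id FROM __PARQUET__ LIMIT 100",
)

const daily = await timeseries({ table: "events", timeCol: "created_at", valueCol: "revenue" })
const smoothed = await movingAverage({
  table: "events", timeCol: "created_at", valueCol: "revenue", windowDays: 7,
})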
Parquet Data Pipeline
// scripts/ingest-events.ts — ingest events to MotherDuck via Parquet
import { execute, queryRows } from "../lib/motherduck/client"
async function ingestDailyEvents(date: string): Promise<void> {
  const partition = date.replace(/-/g, "_")
  // Install S3 httpfs support and pass credentials from the environment
  await execute(`
    INSTALL httpfs; LOAD httpfs;
    SET s3_region = '${process.env.AWS_REGION ?? "us-east-1"}';
    SET s3_access_key_id = '${process.env.AWS_ACCESS_KEY_ID}';
    SET s3_secret_access_key = '${process.env.AWS_SECRET_ACCESS_KEY}';
  `)
  // Create or replace the daily partition straight from the data lake
  await execute(`
    CREATE OR REPLACE TABLE events_${partition} AS
    SELECT *
    FROM read_parquet('s3://my-data-lake/events/date=${date}/*.parquet')
  `)
  console.log(`Ingested events for ${date}`)
  // Verify row count
  const rows = await queryRows<{ n: number }>(
    `SELECT count(*) AS n FROM events_${partition}`,
  )
  console.log(`Row count: ${rows[0].n}`)
}
const date = process.argv[2] ?? new Date().toISOString().slice(0, 10)
ingestDailyEvents(date).catch(console.error)
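Exports run the other way with COPY. A sketch that writes a summary back to Parquet and, when connected to MotherDuck, publishes a read-only share; the output path, source table, and share name are assumptions:
// scripts/export-summary.ts — export to Parquet, then publish a MotherDuck share
import { execute } from "../lib/motherduck/client"

// Materialize a daily revenue summary as a local Parquet file (path assumed)
await execute(`
  COPY (
    SELECT created_at::DATE AS day, sum(revenue) AS revenue
    FROM events
    GROUP BY 1
  ) TO '/tmp/daily_revenue.parquet' (FORMAT PARQUET)
`)

// MotherDuck only: publish the cloud database as a read-only share
await execute(`CREATE SHARE daily_revenue_share FROM analytics`)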
Next.js Analytics API
// app/api/analytics/timeseries/route.ts — server analytics with DuckDB
import { NextResponse } from "next/server"
import { timeseries } from "@/lib/motherduck/client"
import { auth } from "@/lib/auth"
export async function GET(req: Request) {
  const session = await auth()
  if (!session) return NextResponse.json({ error: "Unauthorized" }, { status: 401 })
  const url = new URL(req.url)
  // Allowlist table names: they are interpolated into SQL, so never pass them through raw
  const ALLOWED_TABLES = new Set(["events", "orders"]) // adjust to your schema
  const table = url.searchParams.get("table") ?? "events"
  if (!ALLOWED_TABLES.has(table)) {
    return NextResponse.json({ error: "Unknown table" }, { status: 400 })
  }
  const bucket = (url.searchParams.get("bucket") ?? "day") as "day" | "week" | "month"
  // Clamp days to a sane range and fall back to 30 on NaN
  const days = Math.min(Math.max(parseInt(url.searchParams.get("days") ?? "30", 10) || 30, 1), 365)
  const data = await timeseries({
    table,
    timeCol: "created_at",
    valueCol: "revenue",
    agg: "sum",
    bucket,
    where: `created_at >= current_date - INTERVAL '${days} days'`,
    limit: days,
  })
  return NextResponse.json({ data })
}
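On the client, the route is a plain fetch away; a minimal example, assuming the file location above maps to /api/analytics/timeseries:
// Client-side usage (route path follows the app/ directory layout above)
const res = await fetch("/api/analytics/timeseries?table=events&bucket=week&days=90")
const { data } = await res.json()
// data: [{ period: "2025-01-06", value: 1234.5 }, ...]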
For a purpose-built distributed OLAP database, consider ClickHouse: sub-second queries on billions of rows, materialized views for real-time pre-aggregation, and horizontal scaling for high-ingest event streams. ClickHouse is a remote server engine, while DuckDB/MotherDuck runs in-process with no server, which makes it ideal for analytical queries on datasets that fit on one machine or in serverless compute; see the ClickHouse guide. For teams operating within Google Cloud, BigQuery offers petabyte-scale serverless SQL with automatic scaling, BI connectors, ML integration via BigQuery ML, and the GCP IAM permission model. BigQuery handles truly massive warehouse workloads, while MotherDuck excels when you want fast local or serverless analytics without paying per-query on a large cloud warehouse; see the BigQuery guide. The Claude Skills 360 bundle includes MotherDuck skill sets covering DuckDB queries, Parquet pipelines, and serverless analytics. Start with the free tier to try in-process OLAP generation.