
Claude Code for Dagster: Data Pipeline Orchestration

Published: August 9, 2027
Read time: 6 min read
By: Claude Skills 360

Dagster is a data orchestrator built around software-defined assets: @asset-decorated functions declare the data you produce, and the dependency graph is derived from them. from dagster import asset, AssetExecutionContext, Definitions, ScheduleDefinition, sensor covers the core API. @asset(deps=["upstream_asset"]) builds the DAG from declared dependencies. context.log.info("msg") emits structured logs, and context.add_output_metadata({"row_count": n}) attaches metadata shown in the UI. @multi_asset(outs={"users": AssetOut(), "orders": AssetOut()}) yields multiple outputs from one function. Resources are injectable dependencies: class PostgresResource(ConfigurableResource): url: str, then with self.db_connection() as conn inside its methods. IOManagers control where asset outputs are stored: class S3IOManager(IOManager) implements handle_output and load_input. Schedules run jobs on a cron cadence: ScheduleDefinition(job=refresh_job, cron_schedule="0 * * * *"). Sensors trigger runs from external events: @sensor(asset_selection=AssetSelection.assets("raw_events")). Partitions slice assets by key: DailyPartitionsDefinition(start_date="2025-01-01") plus @asset(partitions_def=daily_partitions) gives each run a context.partition_key such as "2025-07-15". load_assets_from_dbt_project(project_dir="dbt") turns dbt models into Dagster assets with full lineage. Definitions(assets=[...], resources={...}, schedules=[...], sensors=[...]) assembles the workspace, and dagster dev starts the local UI. Claude Code generates Dagster asset pipelines, resource configurations, partitioned jobs, and dbt integrations.
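
Before the full project below, here is a minimal single-file sketch of that asset model; the file and asset names are illustrative only, not part of the pipeline built in the rest of this post.

# minimal_pipeline.py: hypothetical single-file example of the asset model
import pandas as pd
from dagster import AssetExecutionContext, Definitions, MetadataValue, asset


@asset
def raw_numbers() -> pd.DataFrame:
    """Upstream asset: produce a small DataFrame."""
    return pd.DataFrame({"n": [1, 2, 3]})


@asset
def doubled_numbers(context: AssetExecutionContext, raw_numbers: pd.DataFrame) -> pd.DataFrame:
    """Downstream asset: the dependency is wired from the parameter name."""
    df = raw_numbers.assign(n=raw_numbers["n"] * 2)
    context.add_output_metadata({"row_count": MetadataValue.int(len(df))})
    return df


# `dagster dev -f minimal_pipeline.py` loads this file and shows both assets in the UI.
defs = Definitions(assets=[raw_numbers, doubled_numbers])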

CLAUDE.md for Dagster

## Dagster Stack
- Version: dagster >= 1.7, dagster-webserver for UI
- Assets: @asset(deps=[], group_name="ingestion", compute_kind="python")
- Resources: class MyResource(ConfigurableResource) — injected via definitions
- IOManager: class MyIOManager(IOManager) — handle_output + load_input
- Partitions: DailyPartitionsDefinition(start_date="2025-01-01")
- Definitions: Definitions(assets=all_assets, resources={"db": DbResource(url=...)})
- Run via: dagster dev (local UI + daemon) or dagster-webserver plus dagster-daemon run in production
- dbt: load_assets_from_dbt_project(project_dir="./dbt", profiles_dir="./dbt")

Core Asset Pipeline

# pipelines/assets/ingestion.py — raw data ingestion assets
from dagster import (
    asset,
    AssetExecutionContext,
    multi_asset,
    AssetOut,
    MaterializeResult,
    MetadataValue,
    DailyPartitionsDefinition,
)
from ..resources import AppApiResource  # defined in pipelines/resources.py (Resources section below)
import pandas as pd
import requests

daily_partitions = DailyPartitionsDefinition(start_date="2025-01-01")


@asset(
    group_name="ingestion",
    compute_kind="api",
    description="Raw events fetched from the application API",
)
def raw_events(context: AssetExecutionContext, app_api: AppApiResource) -> pd.DataFrame:
    """Fetch raw events from the REST API."""
    # The AppApiResource is injected by Dagster through the typed parameter.
    api_key = app_api.api_key
    base_url = app_api.base_url

    resp = requests.get(
        f"{base_url}/api/admin/events",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    df = pd.DataFrame(data)

    context.add_output_metadata({
        "row_count":    MetadataValue.int(len(df)),
        "columns":      MetadataValue.json(list(df.columns)),
        "preview":      MetadataValue.md(df.head(5).to_markdown()),
    })

    return df


@asset(
    group_name="ingestion",
    compute_kind="api",
    partitions_def=daily_partitions,
    description="Daily event partition from API",
)
def raw_events_partitioned(context: AssetExecutionContext, app_api: AppApiResource) -> pd.DataFrame:
    """Fetch events for a specific date partition."""
    date = context.partition_key  # e.g. "2025-07-15"
    api_key = app_api.api_key
    base_url = app_api.base_url

    resp = requests.get(
        f"{base_url}/api/admin/events",
        params={"date": date, "limit": 10000},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    )
    resp.raise_for_status()
    df = pd.DataFrame(resp.json()["data"])
    df["partition_date"] = date

    context.log.info(f"Fetched {len(df)} events for {date}")
    return df


@multi_asset(
    outs={
        "clean_users":  AssetOut(group_name="transform", description="Deduplicated user records"),
        "clean_orders": AssetOut(group_name="transform", description="Validated order records"),
    },
    compute_kind="pandas",
)
def split_events(
    context: AssetExecutionContext,
    raw_events: pd.DataFrame,  # upstream dependency inferred from the parameter name
):
    """Split and clean raw events into typed tables."""
    users_df = (
        raw_events[raw_events["type"] == "user_created"]
        .drop_duplicates(subset=["userId"])
        .rename(columns={"userId": "id"})
    )

    orders_df = (
        raw_events[raw_events["type"] == "order_placed"]
        .dropna(subset=["orderId", "userId", "amount"])
    )

    context.log.info(f"Users: {len(users_df)}, Orders: {len(orders_df)}")
    return users_df, orders_df
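
A downstream asset can consume both outputs of split_events and persist an aggregate through a resource. The sketch below shows what a user_metrics asset might look like; the file path, target table, and join logic are assumptions, WarehouseResource comes from the Resources section that follows, and user_metrics matches the asset key the hourly job selects later in this post.

# pipelines/assets/metrics.py: hypothetical downstream metrics asset
import pandas as pd
from dagster import AssetExecutionContext, MetadataValue, asset

from ..resources import WarehouseResource


@asset(
    group_name="metrics",
    compute_kind="pandas",
    description="Per-user order counts and revenue, loaded into the warehouse",
)
def user_metrics(
    context: AssetExecutionContext,
    clean_users: pd.DataFrame,
    clean_orders: pd.DataFrame,
    warehouse: WarehouseResource,
) -> pd.DataFrame:
    """Join cleaned users and orders, aggregate per user, and bulk-load the result."""
    per_user = (
        clean_orders.groupby("userId", as_index=False)
        .agg(order_count=("orderId", "count"), revenue=("amount", "sum"))
        .merge(clean_users, left_on="userId", right_on="id", how="left")
    )

    # Assumes a user_metrics table with matching columns already exists in the warehouse.
    rows = warehouse.copy_from_df(per_user, "user_metrics")
    context.add_output_metadata({"rows_loaded": MetadataValue.int(rows)})
    return per_user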

Resources

# pipelines/resources.py — injectable Dagster resources
from dagster import ConfigurableResource
from contextlib import contextmanager
from typing import Generator
import psycopg2
import psycopg2.extras
import requests


class AppApiResource(ConfigurableResource):
    """REST API client for the application."""
    base_url: str
    api_key:  str

    def get_json(self, path: str, **params) -> dict:
        resp = requests.get(
            f"{self.base_url}{path}",
            params=params,
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def post_json(self, path: str, body: dict) -> dict:
        resp = requests.post(
            f"{self.base_url}{path}",
            json=body,
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()


class WarehouseResource(ConfigurableResource):
    """PostgreSQL data warehouse connection."""
    dsn: str  # postgresql://user:pass@host:5432/db

    @contextmanager
    def get_connection(self) -> Generator[psycopg2.extensions.connection, None, None]:
        conn = psycopg2.connect(self.dsn, cursor_factory=psycopg2.extras.RealDictCursor)
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()

    def execute(self, sql: str, params=None) -> list[dict]:
        with self.get_connection() as conn:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                if cur.description:
                    return [dict(row) for row in cur.fetchall()]
                return []

    def copy_from_df(self, df, table: str) -> int:
        """Bulk insert a DataFrame using COPY."""
        import io
        buf = io.StringIO()
        df.to_csv(buf, index=False, header=False)
        buf.seek(0)

        with self.get_connection() as conn:
            with conn.cursor() as cur:
                cur.copy_from(buf, table, sep=",", null="")
                return cur.rowcount
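
Because ConfigurableResource subclasses are ordinary objects, they can be constructed directly for unit tests or one-off scripts, with no Dagster run context involved. A quick sketch; the local URLs, key, and query are placeholders.

# scripts/check_resources.py: hypothetical ad-hoc usage of the resources above
import pandas as pd

from pipelines.resources import AppApiResource, WarehouseResource

# Construct resources directly; Dagster is not involved here.
api = AppApiResource(base_url="http://localhost:8000", api_key="dev-key")
warehouse = WarehouseResource(dsn="postgresql://postgres:postgres@localhost:5432/dev")

events = pd.DataFrame(api.get_json("/api/admin/events", limit=10)["data"])
print(f"Fetched {len(events)} events from the API")

# execute() returns a list of dict rows thanks to RealDictCursor.
print(warehouse.execute("SELECT count(*) AS n FROM information_schema.tables"))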

IOManager

# pipelines/io_managers.py — custom storage IOManager
from dagster import IOManager, OutputContext, InputContext, io_manager, Field
import pandas as pd
import boto3
import io


class S3ParquetIOManager(IOManager):
    """Store DataFrames as Parquet in S3."""

    def __init__(self, bucket: str, prefix: str = "dagster"):
        self.bucket = bucket
        self.prefix = prefix
        self.s3 = boto3.client("s3")

    def _get_key(self, context: OutputContext | InputContext) -> str:
        parts = list(context.asset_key.path)
        if context.has_partition_key:
            parts.append(context.partition_key)
        return f"{self.prefix}/{'/'.join(parts)}.parquet"

    def handle_output(self, context: OutputContext, obj: pd.DataFrame) -> None:
        key = self._get_key(context)
        buf = io.BytesIO()
        obj.to_parquet(buf, index=False, compression="snappy")
        buf.seek(0)
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=buf.getvalue())
        context.log.info(f"Wrote {len(obj)} rows to s3://{self.bucket}/{key}")

    def load_input(self, context: InputContext) -> pd.DataFrame:
        key = self._get_key(context)
        buf = io.BytesIO()
        self.s3.download_fileobj(self.bucket, key, buf)
        buf.seek(0)
        return pd.read_parquet(buf)


@io_manager(config_schema={"bucket": str, "prefix": Field(str, default_value="dagster")})
def s3_parquet_io_manager(init_context) -> S3ParquetIOManager:
    return S3ParquetIOManager(
        bucket=init_context.resource_config["bucket"],
        prefix=init_context.resource_config.get("prefix", "dagster"),
    )
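
For local development it is common to keep the asset code unchanged but store outputs on disk instead of S3. The helper below is a sketch of per-environment selection; the DAGSTER_ENV and DAGSTER_LOCAL_STORAGE variables are assumptions, and FilesystemIOManager is Dagster's built-in filesystem IOManager.

# pipelines/io_managers.py (continued): hypothetical per-environment IOManager choice
import os

from dagster import FilesystemIOManager


def io_manager_for_env():
    """Return the S3 Parquet IOManager in prod and a local filesystem one elsewhere."""
    if os.environ.get("DAGSTER_ENV") == "prod":
        return s3_parquet_io_manager.configured({
            "bucket": os.environ["S3_BUCKET"],
            "prefix": "dagster",
        })
    # FilesystemIOManager pickles each output under base_dir, keyed by asset key.
    return FilesystemIOManager(base_dir=os.environ.get("DAGSTER_LOCAL_STORAGE", ".dagster_storage"))


# In Definitions, map "io_manager" to io_manager_for_env() instead of hard-coding S3.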

Schedules and Sensors

# pipelines/schedules.py — schedules and sensors
from dagster import (
    ScheduleDefinition,
    DefaultScheduleStatus,
    sensor,
    SensorEvaluationContext,
    RunRequest,
    SkipReason,
    AssetSelection,
    define_asset_job,
)

# ── Jobs ──────────────────────────────────────────────────────────────────

daily_ingest_job = define_asset_job(
    name="daily_ingest",
    selection=AssetSelection.groups("ingestion", "transform"),
    tags={"team": "data-eng"},
)

hourly_metrics_job = define_asset_job(
    name="hourly_metrics",
    selection=AssetSelection.keys("user_metrics", "revenue_metrics"),
)

# ── Schedules ─────────────────────────────────────────────────────────────

daily_schedule = ScheduleDefinition(
    name="daily_pipeline",
    job=daily_ingest_job,
    cron_schedule="0 2 * * *",   # 2am UTC
    default_status=DefaultScheduleStatus.RUNNING,
)

hourly_schedule = ScheduleDefinition(
    name="hourly_metrics",
    job=hourly_metrics_job,
    cron_schedule="15 * * * *",
    default_status=DefaultScheduleStatus.RUNNING,
)

# ── Sensor ────────────────────────────────────────────────────────────────

@sensor(
    job=daily_ingest_job,
    minimum_interval_seconds=60,
)
def new_data_sensor(context: SensorEvaluationContext):
    """Trigger ingestion when new export files appear in S3."""
    import boto3
    s3 = boto3.client("s3")
    bucket = "my-exports"

    cursor = context.cursor or "0"
    last_ts = float(cursor)

    paginator = s3.get_paginator("list_objects_v2")
    new_objects = []

    for page in paginator.paginate(Bucket=bucket, Prefix="exports/"):
        for obj in page.get("Contents", []):
            if obj["LastModified"].timestamp() > last_ts:
                new_objects.append(obj)

    if not new_objects:
        return SkipReason("No new export files found")

    # Persist the newest timestamp so the same files are not picked up again.
    new_cursor = str(max(obj["LastModified"].timestamp() for obj in new_objects))
    context.update_cursor(new_cursor)

    # If raw_events accepted a Config schema, the new keys could be passed via run_config here.
    return RunRequest(
        run_key=new_cursor,
        tags={"triggered_by": "s3_sensor", "file_count": str(len(new_objects))},
    )
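
Because raw_events_partitioned carries a DailyPartitionsDefinition, a schedule can also be derived from the partitions themselves rather than from a raw cron string. The sketch below assumes a hypothetical partitioned_ingest job name and a 3am UTC launch offset, and would live in the same schedules module.

# pipelines/schedules.py (continued): hypothetical partition-driven schedule
from dagster import AssetSelection, build_schedule_from_partitioned_job, define_asset_job

from .assets.ingestion import daily_partitions

partitioned_ingest_job = define_asset_job(
    name="partitioned_ingest",
    selection=AssetSelection.assets("raw_events_partitioned"),
    partitions_def=daily_partitions,
)

# One run per daily partition, launched at 03:00 UTC for the newest partition key.
partitioned_ingest_schedule = build_schedule_from_partitioned_job(
    partitioned_ingest_job,
    hour_of_day=3,
)

# Append partitioned_ingest_schedule to the schedules list in Definitions to activate it.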

Definitions Assembly

# pipelines/definitions.py — Dagster Definitions (workspace entry point)
import os
from dagster import Definitions, load_assets_from_package_module
from dagster_dbt import DbtCliResource, load_assets_from_dbt_project

from . import assets
from .resources import AppApiResource, WarehouseResource
from .io_managers import s3_parquet_io_manager
from .schedules import daily_schedule, hourly_schedule, new_data_sensor

all_assets = load_assets_from_package_module(assets)

# Load dbt models as Dagster assets (lineage auto-wired)
dbt_assets = load_assets_from_dbt_project(
    project_dir=os.environ.get("DBT_PROJECT_DIR", "./dbt"),
    profiles_dir=os.environ.get("DBT_PROFILES_DIR", "./dbt"),
    key_prefix="dbt",
)

defs = Definitions(
    assets    = [*all_assets, *dbt_assets],
    resources = {
        "app_api": AppApiResource(
            base_url = os.environ["APP_BASE_URL"],
            api_key  = os.environ["APP_API_KEY"],
        ),
        "warehouse": WarehouseResource(dsn=os.environ["WAREHOUSE_DSN"]),
        "io_manager": s3_parquet_io_manager.configured({
            "bucket": os.environ.get("S3_BUCKET", "my-datalake"),
            "prefix": "dagster",
        }),
    },
    schedules = [daily_schedule, hourly_schedule],
    sensors   = [new_data_sensor],
)
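
One refinement once this works end to end: dagster.EnvVar defers environment-variable resolution until a run launches, so dagster dev can load the code location even when production secrets are not exported locally. Below is a sketch of the resources mapping rewritten that way; everything else in Definitions stays the same.

# Hypothetical variation on the resources mapping above using dagster.EnvVar.
from dagster import EnvVar

from pipelines.resources import AppApiResource, WarehouseResource

resources = {
    "app_api": AppApiResource(
        base_url = EnvVar("APP_BASE_URL"),   # resolved at launch time, not import time
        api_key  = EnvVar("APP_API_KEY"),
    ),
    "warehouse": WarehouseResource(dsn=EnvVar("WAREHOUSE_DSN")),
}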

Consider Prefect instead when you want a Python-native orchestrator with a simpler imperative flow model (@flow and @task), built-in parallelism via .map(), automatic retries and caching, and first-class serverless execution without running a persistent daemon; Prefect optimizes for developer ergonomics and dynamic workflows, while Dagster is built around the asset graph with a richer UI for data lineage and data quality checks. Consider Apache Airflow when your environment already standardizes on it (Astronomer, MWAA, Cloud Composer) and carries a large ecosystem of existing DAGs and operators; Airflow is the incumbent task-based orchestrator, while Dagster is the asset-based successor with stronger software engineering practices, type safety, and first-class partitioning. The Claude Skills 360 bundle includes Dagster skill sets covering asset pipelines, resources, partitions, and dbt integrations. Start with the free tier to try data pipeline generation.
