Claude Code for Elixir Nx and Axon: Machine Learning in Elixir — Claude Skills 360 Blog

Claude Code for Elixir Nx and Axon: Machine Learning in Elixir

Published: February 7, 2027
Read time: 9 min read
By: Claude Skills 360

Nx (Numerical Elixir) brings multi-dimensional tensor operations and JIT compilation to Elixir. Functions defined with defn compile to optimized native code via EXLA (Google's XLA compiler) or EMLX (MLX on Apple Silicon). Axon builds neural networks as composable data structures that compile down to Nx operations, so any Nx backend runs the same Axon model. Bumblebee loads HuggingFace models — BERT, CLIP, Whisper — directly in Elixir, no Python required. Nx.Serving batches concurrent inference requests automatically, routing multiple callers' inputs into a single forward pass. Explorer provides a DataFrame API similar to pandas, backed by Polars. Livebook notebooks run interactive Nx/Axon experiments with live visualization. Claude Code generates Nx tensor pipelines, Axon model definitions, Bumblebee serving configurations, Explorer data transformations, and supervised training loops for Elixir ML applications.

CLAUDE.md for Nx/Axon Projects

## Nx/Axon Stack
- Version: nx >= 0.9, axon >= 0.6, bumblebee >= 0.5, exla >= 0.9, explorer >= 0.9
- Backend: Nx.global_default_backend(EXLA.Backend) for GPU — or config :nx, default_backend: EXLA.Backend
- defn: defn (via import Nx.Defn) for JIT-compiled numerical functions — runs on EXLA/CPU/GPU
- Axon: Axon.input/dense/conv/relu etc. — builds computation graph, compiled on first run
- Bumblebee: Bumblebee.load_model/load_tokenizer from HuggingFace Hub
- Serving: Nx.Serving.batched_run — concurrent inference with automatic batching
- Explorer: Explorer.DataFrame.from_csv/lazy queries/group_by
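The stack above maps to a handful of mix.exs dependencies plus two config lines; a minimal sketch (version constraints mirror the list above and are illustrative):

```elixir
# mix.exs — dependencies for the Nx/Axon stack
defp deps do
  [
    {:nx, "~> 0.9"},
    {:exla, "~> 0.9"},
    {:axon, "~> 0.6"},
    {:polaris, "~> 0.1"},
    {:bumblebee, "~> 0.5"},
    {:explorer, "~> 0.9"}
  ]
end

# config/config.exs — make EXLA the default backend and defn compiler
import Config

config :nx, default_backend: EXLA.Backend
config :nx, :default_defn_options, compiler: EXLA
```

With the defn compiler configured globally, individual `Axon.build` and serving calls no longer need a `compiler: EXLA` option, though passing it explicitly (as the listings below do) is harmless.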

Nx Tensor Operations

# lib/tensor_ops.ex — Nx defn for JIT-compiled numerical code
defmodule OrderAnalytics.TensorOps do
  import Nx.Defn

  # defn: JIT-compiled to EXLA/native code
  defn normalize_features(features) do
    mean = Nx.mean(features, axes: [0])
    std = Nx.standard_deviation(features, axes: [0])
    (features - mean) / (std + 1.0e-8)
  end

  defn cosine_similarity(a, b) do
    # Batch cosine similarity — a: {n, d}, b: {m, d}
    a_norm = a / Nx.LinAlg.norm(a, axes: [1], keep_axes: true)
    b_norm = b / Nx.LinAlg.norm(b, axes: [1], keep_axes: true)
    # Contract over the feature axis for pairwise similarity: {n, m}
    Nx.dot(a_norm, [1], b_norm, [1])
  end

  defn softmax_cross_entropy(logits, labels) do
    # Log-softmax via the logsumexp trick, then standard cross-entropy
    log_probs = logits - Nx.logsumexp(logits, axes: [-1], keep_axes: true)
    -Nx.sum(labels * log_probs, axes: [-1])
  end

  # Gradient via value_and_grad, imported from Nx.Defn
  defn loss_and_grad(model_fn, params, batch) do
    # The first argument is what we differentiate with respect to
    value_and_grad(params, fn p -> model_fn.(p, batch) end)
  end
end
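For intuition, here is the same cosine math on two plain lists, outside defn (CosineDemo is a hypothetical helper; the defn version above is what runs JIT-compiled over whole {n, d} matrices):

```elixir
defmodule CosineDemo do
  # Plain-Elixir cosine similarity for a single pair of vectors —
  # dot product of the vectors divided by the product of their norms
  def cosine(a, b) do
    dot = Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    norm = fn v -> :math.sqrt(Enum.reduce(v, 0.0, fn x, acc -> acc + x * x end)) end
    dot / (norm.(a) * norm.(b))
  end
end
```

Orthogonal vectors score 0.0 and parallel vectors score 1.0; the defn version computes all n×m pairs in one `Nx.dot`.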

Axon Neural Network

# lib/order_classifier.ex — Axon model definition and training
defmodule OrderAnalytics.OrderClassifier do
  @moduledoc """
  Binary classifier: predict if an order will be returned (churn signal).
  Features: total_cents, item_count, days_since_last_order, customer_age_days
  """

  def build_model(input_features) do
    Axon.input("features", shape: {nil, input_features})
    |> Axon.dense(128, activation: :relu)
    |> Axon.dropout(rate: 0.3)
    |> Axon.dense(64, activation: :relu)
    |> Axon.dropout(rate: 0.2)
    |> Axon.dense(32, activation: :relu)
    |> Axon.dense(2, activation: :softmax)
  end

  def train(model, train_data, opts \\ []) do
    epochs = Keyword.get(opts, :epochs, 50)
    learning_rate = Keyword.get(opts, :learning_rate, 0.001)

    optimizer = Polaris.Optimizers.adam(learning_rate: learning_rate)

    loop =
      model
      |> Axon.Loop.trainer(
        :categorical_cross_entropy,
        optimizer,
        log: 5  # Log progress every 5 iterations
      )
      |> Axon.Loop.metric(:accuracy)
      |> Axon.Loop.validate(model, build_val_data(train_data))
      |> Axon.Loop.early_stop("validation_loss", mode: :min, patience: 5)

    Axon.Loop.run(loop, train_data, %{}, epochs: epochs, compiler: EXLA)
  end

  def predict(model, model_state, features) do
    # Axon.Loop.run with a trainer loop returns the trained model state
    Axon.predict(model, model_state, %{"features" => features}, compiler: EXLA)
  end

  defp build_val_data(train_data) do
    # Reuses the last 20% of training batches for validation.
    # For a strict hold-out set, remove these batches from train_data.
    count = Enum.count(train_data)
    val_count = max(div(count, 5), 1)
    Enum.take(train_data, -val_count)
  end
end
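The build_val_data helper above reuses the tail of the training data, so the model also trains on its validation batches. A stricter hold-out split in plain Elixir might look like this (SplitDemo is a hypothetical helper, not part of the Axon API):

```elixir
defmodule SplitDemo do
  # Hold out the last val_fraction of batches for validation and
  # return {train, val} so validation batches never reach the trainer
  def train_val_split(batches, val_fraction \\ 0.2) do
    n = Enum.count(batches)
    val_count = max(round(n * val_fraction), 1)
    Enum.split(batches, n - val_count)
  end
end
```

Feed `train` to `Axon.Loop.run` and `val` to `Axon.Loop.validate` so the early-stopping metric reflects genuinely unseen data.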

Bumblebee Serving

# lib/text_classifier.ex — Bumblebee HuggingFace model serving
defmodule OrderAnalytics.TextClassifier do
  @moduledoc """
  Customer support message classifier using distilbert from HuggingFace.
  Classifies messages as: refund_request, shipping_issue, product_question, other
  """

  def start_serving do
    # Load from HuggingFace Hub (cached locally after first download)
    {:ok, model_info} = Bumblebee.load_model(
      {:hf, "distilbert-base-uncased-finetuned-sst-2-english"},
      backend: EXLA.Backend
    )

    # Load the tokenizer from the same repo as the model checkpoint
    {:ok, tokenizer} = Bumblebee.load_tokenizer(
      {:hf, "distilbert-base-uncased-finetuned-sst-2-english"}
    )

    # Build serving — handles batching and tokenization
    serving = Bumblebee.Text.text_classification(
      model_info,
      tokenizer,
      compile: [batch_size: 8, sequence_length: 128],
      defn_options: [compiler: EXLA]
    )

    # Start as supervised Nx.Serving GenServer
    Nx.Serving.start_link(
      name: __MODULE__,
      serving: serving,
      batch_timeout: 50  # Wait up to 50ms to fill batch
    )
  end

  def classify(text) when is_binary(text) do
    # batched_run routes through the named serving process,
    # so concurrent calls are batched into a single forward pass
    Nx.Serving.batched_run(__MODULE__, text)
  end

  def classify_batch(texts) when is_list(texts) do
    texts
    |> Enum.map(fn text ->
      Task.async(fn -> classify(text) end)
    end)
    |> Task.await_many(5_000)
  end
end
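To build intuition for what `batch_timeout` does, here is a toy batching GenServer in plain Elixir (a hypothetical illustration, not Nx.Serving's actual implementation): the first request opens a batching window, and when the window closes every pending caller is answered from one pass over the batch.

```elixir
defmodule ToyBatcher do
  use GenServer

  def start_link(opts \\ []) do
    timeout = Keyword.get(opts, :batch_timeout, 50)
    GenServer.start_link(__MODULE__, timeout, name: __MODULE__)
  end

  def run(input), do: GenServer.call(__MODULE__, {:run, input})

  @impl true
  def init(timeout), do: {:ok, %{timeout: timeout, pending: []}}

  @impl true
  def handle_call({:run, input}, from, state) do
    # First request in an empty queue starts the batch window
    if state.pending == [], do: Process.send_after(self(), :flush, state.timeout)
    {:noreply, %{state | pending: [{from, input} | state.pending]}}
  end

  @impl true
  def handle_info(:flush, state) do
    # One "forward pass" over the whole batch (upcasing stands in
    # for model inference), then reply to every waiting caller
    batch = Enum.reverse(state.pending)
    outputs = Enum.map(batch, fn {_from, input} -> String.upcase(input) end)

    batch
    |> Enum.zip(outputs)
    |> Enum.each(fn {{from, _input}, output} -> GenServer.reply(from, output) end)

    {:noreply, %{state | pending: []}}
  end
end
```

The real Nx.Serving adds partitioning, batch size limits, and distribution, but the callback shape — defer the reply, flush on a timer, answer everyone at once — is the same idea.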
# lib/application.ex — supervise the serving process
defmodule OrderAnalytics.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Nx.Serving supervised as a named process via an explicit child spec,
      # since TextClassifier exposes start_serving/0 rather than start_link/1
      %{
        id: OrderAnalytics.TextClassifier,
        start: {OrderAnalytics.TextClassifier, :start_serving, []}
      },
      OrderAnalyticsWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end

Explorer DataFrames

# lib/data_pipeline.ex — Explorer DataFrame for analytics
defmodule OrderAnalytics.DataPipeline do
  require Explorer.DataFrame, as: DF  # require: DF.filter/mutate/summarise are macros
  alias Explorer.Series

  def compute_customer_features(orders_path) do
    # DF.lazy/1 builds a query plan that executes once at DF.collect/1
    df =
      DF.from_csv!(orders_path)
      |> DF.lazy()
      |> DF.filter(col("status") != "cancelled")
      |> DF.mutate(
        amount_dollars: col("total_cents") / 100,
        # Date subtraction yields a duration (millisecond precision);
        # cast to integer, then integer-divide down to whole days
        days_since_created:
          quotient(
            cast(^Date.utc_today() - cast(col("created_at"), :date), :integer),
            86_400_000
          )
      )
      |> DF.group_by("customer_id")
      |> DF.summarise(
        order_count: count(col("id")),
        total_spent: sum(col("amount_dollars")),
        avg_order_value: mean(col("amount_dollars")),
        days_since_last_order: min(col("days_since_created")),
        max_order_value: max(col("amount_dollars"))
      )
      |> DF.filter(col("order_count") >= 1)
      |> DF.sort_by(desc: col("total_spent"))
      |> DF.collect()  # Execute the lazy query

    df
  end

  def to_nx_tensor(df, feature_columns) do
    # Convert Explorer DataFrame columns to Nx tensor for ML
    features =
      feature_columns
      |> Enum.map(fn col_name ->
        df[col_name]
        |> Series.to_tensor()
        |> Nx.reshape({:auto, 1})
      end)
      |> Nx.concatenate(axis: 1)

    Nx.as_type(features, :f32)
  end

  def train_pipeline(orders_path) do
    # Full pipeline: data → features → model
    feature_cols = [
      "order_count",
      "total_spent",
      "avg_order_value",
      "days_since_last_order"
    ]

    features_df = compute_customer_features(orders_path)
    features_tensor = to_nx_tensor(features_df, feature_cols)

    # Normalize
    normalized = OrderAnalytics.TensorOps.normalize_features(features_tensor)

    # Return batched inputs; zip these with label tensors into
    # {input, target} tuples before feeding Axon.Loop.run
    normalized
    |> Nx.to_batched(32)
    |> Enum.map(fn batch ->
      %{"features" => batch}
    end)
  end
end
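Nx.to_batched splits a tensor along its leading axis much the way Enum.chunk_every splits a list; a plain-Elixir sketch of that batching step:

```elixir
# 10 "rows" split into batches of 4 — the final batch comes up short
rows = Enum.to_list(1..10)
batches = Enum.chunk_every(rows, 4)
# batches == [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]
```

Nx.to_batched handles the analogous short final batch via its `:leftover` option, which can repeat rows to fill the batch or discard the remainder.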

When Elixir-native ML isn't required — for example, when model management depends on Python tooling like Weights & Biases or ONNX export — see the HuggingFace Transformers guide for fine-tuning and export workflows. To connect real-time model predictions to a Phoenix web interface with live updates, see the Elixir LiveView guide for PubSub and stream patterns. The Claude Skills 360 bundle includes Elixir Nx skill sets covering tensor operations, Axon training loops, and Bumblebee serving. Start with the free tier to try Nx/Axon pipeline generation.

