Triton lets you write custom GPU kernels in Python (`pip install triton`). `@triton.jit` marks a function as a GPU kernel; inside it, `tl.program_id(axis=0)` returns the current program (block) index, `tl.arange(0, BLOCK)` builds per-lane offsets, and `tl.load(ptr, mask=mask, other=0.0)` / `tl.store(..., mask=mask)` handle out-of-bounds accesses safely. `tl.constexpr` marks compile-time constants such as block sizes, and `@triton.autotune` searches a list of `triton.Config` objects keyed on problem dimensions. For 2D problems like matmul, use `pid_m = tl.program_id(0); pid_n = tl.program_id(1)` (or a swizzled 1D grid); `tl.dot(a_tile, b_tile)` uses tensor cores automatically for float16/bfloat16 inputs. `tl.atomic_add(ptr, val)` enables reductions across blocks, `triton.testing.assert_close` checks results against a PyTorch reference, and `triton.testing.Benchmark` plots throughput. The sections below build this up through four worked kernels — vector addition, fused softmax, fused layer norm, and autotuned matmul — plus a benchmarking harness; the same patterns extend to attention and custom activation functions.
# CLAUDE.md for Triton
## Triton Stack
- Version: triton >= 3.0 (installed automatically as a dependency of torch >= 2.0; TorchInductor emits Triton kernels under the hood)
- Kernel: @triton.jit fn(ptr, ..., BLOCK: tl.constexpr) → tl.load/store/dot/sum
- Launch: kernel[grid](args, BLOCK=N) — grid is a tuple of ints or a callable receiving the meta-parameters dict
- Autotuner: @triton.autotune(configs=[Config({...}, num_warps=N)], key=["M","N"])
- tl.constexpr: compile-time constants (block sizes, tile dims)
- tl.dot: tile matmul — float16/bfloat16 inputs hit tensor cores; each tile dim must be at least 16
- Debug: triton.testing.assert_close(ref, out, atol=1e-2, rtol=1e-2)
## Triton Kernels
# kernels/triton_ops.py — production Triton kernels with autotuning
from __future__ import annotations
import math
import torch
import triton
import triton.language as tl
# ── 1. Vector Addition (hello-world kernel) ───────────────────────────────────
@triton.jit
def _add_kernel(
    x_ptr, y_ptr, out_ptr,
    n_elements,
    BLOCK_SIZE: tl.constexpr,
):
    """Elementwise addition: out = x + y."""
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)
def triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    assert x.shape == y.shape and x.is_cuda
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    _add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
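The launch math above — one program per BLOCK_SIZE chunk of the input, with a mask guarding the final partial block — can be sanity-checked on the CPU. A minimal pure-Python model (no GPU required; `block_offsets` and `cdiv` are illustrative helpers mirroring the kernel's index arithmetic, not Triton APIs):

```python
# Pure-Python model of the blocked indexing in _add_kernel: each program id
# covers BLOCK_SIZE contiguous offsets; the mask disables lanes past
# n_elements in the final partial block.
def block_offsets(pid: int, block_size: int, n: int):
    offsets = [pid * block_size + i for i in range(block_size)]
    mask = [o < n for o in offsets]
    return offsets, mask

def cdiv(a: int, b: int) -> int:  # same formula as triton.cdiv
    return (a + b - 1) // b

n, BLOCK = 10, 4
grid = cdiv(n, BLOCK)  # 3 programs: two full blocks + one partial
covered = []
for pid in range(grid):
    offs, mask = block_offsets(pid, BLOCK, n)
    covered += [o for o, m in zip(offs, mask) if m]
assert covered == list(range(n))  # every element touched exactly once
```

The same cdiv/mask pattern recurs in every kernel below; only the shape of the offsets changes.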
# ── 2. Fused Softmax ──────────────────────────────────────────────────────────
@triton.autotune(
    configs=[
        # Tune num_warps only: BLOCK_SIZE must cover the whole row, so the
        # wrapper supplies it (autotuning over BLOCK_SIZE would both clash
        # with the explicit launch argument and risk BLOCK_SIZE < n_cols).
        triton.Config({}, num_warps=4),
        triton.Config({}, num_warps=8),
        triton.Config({}, num_warps=16),
    ],
    key=["n_cols"],
)
@triton.jit
def _softmax_kernel(
    output_ptr, input_ptr,
    input_row_stride, output_row_stride,
    n_cols,
    BLOCK_SIZE: tl.constexpr,
):
    """
    Numerically stable softmax over each row.
    One program per row — the full row of BLOCK_SIZE columns stays in registers.
    """
    row_idx = tl.program_id(0)
    row_start_ptr = input_ptr + row_idx * input_row_stride
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    # Load row with -inf for out-of-bounds (exp(-inf) = 0, so padding is inert)
    row = tl.load(row_start_ptr + col_offsets, mask=mask, other=-float("inf"))
    # Subtract max for numerical stability
    row_max = tl.max(row, axis=0)
    row = row - row_max
    # Softmax
    exp_row = tl.exp(row)
    exp_sum = tl.sum(exp_row, axis=0)
    softmax_row = exp_row / exp_sum
    # Write output
    out_row_ptr = output_ptr + row_idx * output_row_stride
    tl.store(out_row_ptr + col_offsets, softmax_row, mask=mask)
def triton_softmax(x: torch.Tensor) -> torch.Tensor:
    """Fused softmax — one kernel instead of eager PyTorch's separate max/sub/exp/sum/div launches."""
    assert x.ndim == 2 and x.is_cuda
    n_rows, n_cols = x.shape
    # BLOCK_SIZE must cover the whole row: next power of 2 >= n_cols
    BLOCK = triton.next_power_of_2(n_cols)
    out = torch.empty_like(x)
    _softmax_kernel[(n_rows,)](
        out, x,
        x.stride(0), out.stride(0),
        n_cols, BLOCK_SIZE=BLOCK,
    )
    return out
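The kernel's arithmetic is the standard numerically stable softmax: subtracting the row max keeps `exp()` in range without changing the result, since softmax is shift-invariant. A CPU sketch of the same math (`softmax_ref` and `next_pow2` are illustrative stand-ins for the kernel and `triton.next_power_of_2`):

```python
import math

# CPU reference for the kernel's math: shifting by the row max avoids
# overflow in exp() while leaving the normalized result unchanged.
def softmax_ref(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

row = [1000.0, 1001.0, 1002.0]  # naive exp(1000.0) would overflow a float64
out = softmax_ref(row)
assert abs(sum(out) - 1.0) < 1e-12  # probabilities sum to 1

# BLOCK_SIZE padding: smallest power of 2 >= n, like triton.next_power_of_2
def next_pow2(n: int) -> int:
    return 1 << (n - 1).bit_length()

assert next_pow2(1000) == 1024
```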
# ── 3. Fused Layer Normalization ──────────────────────────────────────────────
@triton.jit
def _layer_norm_fwd_kernel(
    X_ptr, W_ptr, B_ptr, Y_ptr,
    Mean_ptr, Rstd_ptr,
    stride, N,
    eps,
    BLOCK_SIZE: tl.constexpr,
):
    """Layer norm forward pass: Y = (X - mean) / std * W + B."""
    row = tl.program_id(0)
    X_ptr = X_ptr + row * stride
    Y_ptr = Y_ptr + row * stride
    # Load with masking
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < N
    x = tl.load(X_ptr + cols, mask=mask, other=0.0).to(tl.float32)
    # Compute mean and variance in one pass
    mean = tl.sum(x, axis=0) / N
    diff = tl.where(mask, x - mean, 0.0)
    var = tl.sum(diff * diff, axis=0) / N
    rstd = 1.0 / tl.sqrt(var + eps)
    # Normalize
    x_hat = diff * rstd
    w = tl.load(W_ptr + cols, mask=mask)
    b = tl.load(B_ptr + cols, mask=mask)
    y = x_hat * w + b
    tl.store(Y_ptr + cols, y, mask=mask)
    tl.store(Mean_ptr + row, mean)
    tl.store(Rstd_ptr + row, rstd)
def triton_layer_norm(
    x: torch.Tensor,
    weight: torch.Tensor,
    bias: torch.Tensor,
    eps: float = 1e-5,
) -> torch.Tensor:
    """Fused layer norm — single kernel, no intermediate allocations."""
    assert x.is_cuda and x.ndim == 2
    M, N = x.shape
    BLOCK = triton.next_power_of_2(N)
    y = torch.empty_like(x)
    mean = torch.empty(M, dtype=torch.float32, device=x.device)
    rstd = torch.empty(M, dtype=torch.float32, device=x.device)
    _layer_norm_fwd_kernel[(M,)](
        x, weight, bias, y, mean, rstd,
        x.stride(0), N, eps,
        BLOCK_SIZE=BLOCK,
        num_warps=8 if N >= 2048 else 4,
    )
    return y
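The per-row statistics can be mirrored in plain Python to see why the kernel divides by the true `N` rather than `BLOCK_SIZE`: padded lanes load `0.0` and are zeroed out of `diff` by `tl.where`, so they never skew the sums. `layer_norm_ref` below is an illustrative CPU reference, not part of the kernel:

```python
import math

# CPU reference mirroring _layer_norm_fwd_kernel's per-row math.
def layer_norm_ref(x, w, b, eps=1e-5):
    n = len(x)
    mean = sum(x) / n                             # tl.sum(x) / N
    var = sum((v - mean) ** 2 for v in x) / n     # tl.sum(diff * diff) / N
    rstd = 1.0 / math.sqrt(var + eps)
    return [(v - mean) * rstd * wi + bi for v, wi, bi in zip(x, w, b)]

# With unit weight and zero bias, the output is just the normalized input.
y = layer_norm_ref([1.0, 2.0, 3.0, 4.0], [1.0] * 4, [0.0] * 4)
assert abs(sum(y)) < 1e-9  # normalized output has ~zero mean
```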
# ── 4. Autotuned Matrix Multiplication ───────────────────────────────────────
@triton.autotune(
    configs=[
        triton.Config({"BLOCK_M": 128, "BLOCK_N": 256, "BLOCK_K": 64, "GROUP_M": 8},
                      num_warps=8, num_stages=3),
        triton.Config({"BLOCK_M": 64, "BLOCK_N": 256, "BLOCK_K": 32, "GROUP_M": 8},
                      num_warps=4, num_stages=4),
        triton.Config({"BLOCK_M": 128, "BLOCK_N": 128, "BLOCK_K": 32, "GROUP_M": 8},
                      num_warps=4, num_stages=4),
    ],
    key=["M", "N", "K"],
)
@triton.jit
def _matmul_kernel(
    A_ptr, B_ptr, C_ptr,
    M, N, K,
    stride_am, stride_ak,
    stride_bk, stride_bn,
    stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
    GROUP_M: tl.constexpr,
):
    """Autotuned matrix multiplication C = A @ B using tensor cores."""
    # Grouped program ordering for L2 cache efficiency
    pid = tl.program_id(0)
    num_pid_m = tl.cdiv(M, BLOCK_M)
    num_pid_n = tl.cdiv(N, BLOCK_N)
    num_pid_in_group = GROUP_M * num_pid_n
    group_id = pid // num_pid_in_group
    first_pid_m = group_id * GROUP_M
    group_size_m = min(num_pid_m - first_pid_m, GROUP_M)
    # Index within the group, not the raw pid, so partial last groups map correctly
    pid_m = first_pid_m + (pid % num_pid_in_group) % group_size_m
    pid_n = (pid % num_pid_in_group) // group_size_m
    # Block pointers
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    A_blk = A_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    B_blk = B_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn
    # Accumulate in fp32 for precision
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k in range(0, tl.cdiv(K, BLOCK_K)):
        a = tl.load(A_blk, mask=(offs_m[:, None] < M) & ((k * BLOCK_K + offs_k)[None, :] < K), other=0.0)
        b = tl.load(B_blk, mask=((k * BLOCK_K + offs_k)[:, None] < K) & (offs_n[None, :] < N), other=0.0)
        acc += tl.dot(a, b)  # Tensor core matmul
        A_blk += BLOCK_K * stride_ak
        B_blk += BLOCK_K * stride_bk
    c = acc.to(tl.float16)
    C_blk = C_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.store(C_blk, c, mask=(offs_m[:, None] < M) & (offs_n[None, :] < N))
def triton_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Autotuned fp16 matmul using Triton tensor cores."""
    assert a.shape[1] == b.shape[0] and a.dtype == b.dtype == torch.float16
    M, K = a.shape
    _, N = b.shape
    c = torch.empty((M, N), dtype=torch.float16, device=a.device)
    grid = lambda meta: (triton.cdiv(M, meta["BLOCK_M"]) * triton.cdiv(N, meta["BLOCK_N"]),)
    _matmul_kernel[grid](
        a, b, c, M, N, K,
        a.stride(0), a.stride(1),
        b.stride(0), b.stride(1),
        c.stride(0), c.stride(1),
    )
    return c
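The grouped ordering in `_matmul_kernel` is easiest to trust once you verify that the `pid → (pid_m, pid_n)` remapping is a bijection over the tile grid — every output tile computed exactly once, even when `GROUP_M` does not divide the grid evenly. A pure-Python model (`swizzle` is an illustrative helper mirroring the kernel's index math, not a Triton API):

```python
# Pure-Python model of the grouped program ordering used for L2 reuse.
def swizzle(pid, num_pid_m, num_pid_n, GROUP_M):
    num_pid_in_group = GROUP_M * num_pid_n
    group_id = pid // num_pid_in_group
    first_pid_m = group_id * GROUP_M
    group_size_m = min(num_pid_m - first_pid_m, GROUP_M)  # last group may be partial
    pid_m = first_pid_m + (pid % num_pid_in_group) % group_size_m
    pid_n = (pid % num_pid_in_group) // group_size_m
    return pid_m, pid_n

num_pid_m, num_pid_n, GROUP_M = 7, 5, 4  # deliberately non-divisible
tiles = {swizzle(p, num_pid_m, num_pid_n, GROUP_M)
         for p in range(num_pid_m * num_pid_n)}
assert len(tiles) == num_pid_m * num_pid_n  # bijection: no tile repeated or missed
```

Consecutive pids walk down a narrow column of `GROUP_M` row-tiles before moving right, so the A rows and B columns they touch stay hot in L2.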
# ── Benchmarking ──────────────────────────────────────────────────────────────
@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=["N"],
        x_vals=[256, 512, 1024, 2048, 4096, 8192],
        line_arg="provider",
        line_vals=["triton", "torch"],
        line_names=["Triton", "PyTorch"],
        styles=[("blue", "-"), ("red", "--")],
        ylabel="GB/s",
        plot_name="softmax-bandwidth",
        args={},
    )
)
def benchmark_softmax(N, provider):
    x = torch.randn(4096, N, device="cuda", dtype=torch.float32)
    quantiles = [0.5, 0.2, 0.8]
    if provider == "triton":
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: triton_softmax(x), quantiles=quantiles)
    else:
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.softmax(x, dim=-1), quantiles=quantiles)
    # Softmax reads and writes each element once: 2 * numel * bytes-per-element
    gbps = lambda ms: 2 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
    return gbps(ms), gbps(max_ms), gbps(min_ms)  # slowest ms -> lowest GB/s
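The GB/s conversion is worth a quick arithmetic check: softmax reads and writes each element once, so bytes moved per call is `2 * numel * element_size`. A standalone version of the helper (pure arithmetic, no GPU; the example numbers are illustrative, not measured):

```python
# Standalone version of the gbps lambda from benchmark_softmax.
def gbps(ms: float, numel: int, elem_size: int = 4) -> float:
    # 2x traffic (one read + one write per element), bytes -> GB, ms -> s
    return 2 * numel * elem_size * 1e-9 / (ms * 1e-3)

# A 4096 x 4096 fp32 tensor moved in 0.1 ms:
# 2 * 16Mi elements * 4 B = ~0.134 GB over 1e-4 s
val = gbps(0.1, 4096 * 4096)
assert abs(val - 1342.17728) < 1e-3
```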
Reach for CUDA C++ instead when you need low-level hardware features: warp-level primitives, inline PTX assembly, cooperative groups, shared-memory bank-conflict analysis, or custom tensor-core layouts. CUDA C++ gives full hardware control, while Triton abstracts the tiling and indexing math that accounts for most kernel complexity — letting Python ML engineers write efficient kernels without CUDA expertise. Reach for torch.compile (TorchInductor) when you want automatic fusion of existing PyTorch operations without writing kernels at all; it generates Triton kernels from plain PyTorch code. Hand-written Triton remains necessary when automatic fusion misses opportunities, or when an operation isn't expressible as a composition of PyTorch primitives (such as the packed variable-length attention in Flash Attention). The Claude Skills 360 bundle includes Triton skill sets covering vector operations, fused softmax, layer norm, tensor-core matmul, autotuner configs, and benchmarking harnesses; start with the free tier to try custom GPU kernel generation.