Async Rust is uniquely powerful — zero-cost abstractions, no garbage collector, and true concurrency without data races. But the borrow checker interacts with async in non-obvious ways, and common patterns from other languages can cause deadlocks, panics, or significant performance issues. Claude Code generates correct async Rust by understanding ownership through await points, cancellation safety, and Tokio’s task model.
Tokio Foundation
CLAUDE.md for Async Rust Projects
## Async Rust Configuration
- Tokio 1.x with full features
- Runtime: multi-thread (default) unless specified
- Executor: tokio::main macro for binary entry points
- Testing: tokio::test for async tests
- Tracing: tracing crate (not println/eprintln)
## Critical Patterns
- Never hold std::sync::Mutex across .await — use tokio::sync::Mutex for async-shared state
- Always bound channels — use bounded mpsc, not unbounded (unbounded = unbounded memory)
- Cancellation: futures dropped at any .await point — design cancellation-safe operations
- Blocking work: use spawn_blocking for CPU-intensive or blocking I/O operations
- Clone Arc<T> before spawning tasks — move semantics, not references
## Common Pitfalls
- Deadlock: holding tokio::sync::Mutex then awaiting something that tries to acquire same lock
- Panic: calling .unwrap() on send to a closed channel (use .ok() or handle Err)
- Starvation: one task doing heavy CPU work blocking the async executor thread
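The "always bound channels" rule is easy to feel outside async code: std's `sync_channel` is the blocking analogue of Tokio's bounded mpsc, and shows the backpressure you get for free. A minimal std-only sketch (not tied to any project code):

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    // Capacity 2: once the buffer is full, try_send fails (and send blocks),
    // so a slow consumer pushes back on producers instead of growing memory.
    let (tx, rx) = sync_channel::<u64>(2);
    assert!(tx.try_send(1).is_ok());
    assert!(tx.try_send(2).is_ok());
    assert!(tx.try_send(3).is_err()); // buffer full: backpressure kicks in
    assert_eq!(rx.recv().unwrap(), 1); // consumer drains one slot
    assert!(tx.try_send(3).is_ok());   // room again
}
```

Tokio's `mpsc::channel(cap)` behaves the same way, except a full buffer suspends the sending task at `.await` instead of blocking a thread.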
Core Task Patterns
Fetch data from 10 URLs concurrently, collect results,
cancel all remaining requests if one fails.
use tokio::task::JoinSet;
use anyhow::Result;
async fn fetch_all_or_fail(urls: Vec<String>) -> Result<Vec<String>> {
let mut set = JoinSet::new();
    let client = reqwest::Client::new(); // One client, reused: it wraps an Arc, so clones are cheap
    for url in urls {
        let client = client.clone();
        set.spawn(async move {
            let response = client.get(&url).send().await?;
            let text = response.text().await?;
            Ok::<String, reqwest::Error>(text)
        });
    }
let mut results = Vec::new();
while let Some(result) = set.join_next().await {
match result {
Ok(Ok(text)) => results.push(text),
Ok(Err(e)) => {
                // Abort all remaining tasks (dropping the JoinSet would abort them too)
                set.abort_all();
return Err(e.into());
}
Err(join_err) if join_err.is_cancelled() => continue,
Err(e) => return Err(e.into()),
}
}
Ok(results)
}
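The all-or-nothing semantics of the JoinSet loop has a synchronous cousin worth knowing: collecting an iterator of `Result`s into `Result<Vec<_>, _>` short-circuits on the first `Err`, much like aborting the set on the first failed task. A std-only sketch:

```rust
fn main() {
    // All inputs parse: we get the whole Vec back.
    let ok: Result<Vec<i32>, String> = ["1", "2", "3"]
        .iter()
        .map(|s| s.parse::<i32>().map_err(|e| e.to_string()))
        .collect();
    assert_eq!(ok.unwrap(), vec![1, 2, 3]);

    // One bad input: collect stops and returns the first error,
    // discarding the partial results, just like the fail-fast fetch.
    let bad: Result<Vec<i32>, String> = ["1", "x", "3"]
        .iter()
        .map(|s| s.parse::<i32>().map_err(|e| e.to_string()))
        .collect();
    assert!(bad.is_err());
}
```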
Channels and Message Passing
Build a worker pool that processes jobs from a queue.
Bounded concurrency: max 10 workers at a time.
use tokio::sync::{mpsc, Semaphore};
use std::sync::Arc;
use anyhow::Result;
use tracing::{info, error, instrument};
#[derive(Debug)]
struct Job {
id: u64,
payload: Vec<u8>,
}
#[derive(Debug)]
struct JobResult {
job_id: u64,
output: String,
}
async fn run_worker_pool(max_workers: usize) -> Result<()> {
let (job_tx, mut job_rx) = mpsc::channel::<Job>(1000);
let (result_tx, mut result_rx) = mpsc::channel::<JobResult>(1000);
// Semaphore limits concurrent workers
let sem = Arc::new(Semaphore::new(max_workers));
    // Spawn the dispatcher task
    let result_tx_clone = result_tx.clone();
    let sem_clone = sem.clone();
    let dispatcher = tokio::spawn(async move {
        while let Some(job) = job_rx.recv().await {
            let permit = sem_clone.clone().acquire_owned().await.unwrap();
            let result_tx = result_tx_clone.clone();
            tokio::spawn(async move {
                let _permit = permit; // Held until task completes
                let result = process_job(&job).await;
                if result_tx.send(JobResult {
                    job_id: job.id,
                    output: result,
                }).await.is_err() {
                    error!("Result channel closed");
                }
            });
        }
        // job_rx closed: our sender clone drops here; worker clones drop as each worker finishes
    });
    // Spawn the result collector; it ends once every result sender has been dropped
    let collector = tokio::spawn(async move {
        while let Some(result) = result_rx.recv().await {
            info!(job_id = result.job_id, "Job completed");
        }
    });
    // Feed jobs
    for i in 0..100u64 {
        job_tx.send(Job { id: i, payload: vec![i as u8; 1024] }).await?;
    }
    drop(job_tx); // Signal no more jobs
    drop(result_tx); // Drop our sender copy so the collector can terminate
    dispatcher.await?;
    collector.await?; // Returns only after all workers have finished
    Ok(())
}
#[instrument(skip(job))]
async fn process_job(job: &Job) -> String {
// Simulate work
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
format!("processed job {}", job.id)
}
select! for Timeouts and Cancellation
The external API sometimes never responds.
Add a timeout and a way to cancel in-flight requests from a control signal.
use tokio::{select, time};
use tokio::sync::watch;
use anyhow::Result;
struct ApiClient {
http: reqwest::Client,
shutdown_rx: watch::Receiver<bool>,
}
impl ApiClient {
async fn fetch_with_timeout(&mut self, url: &str) -> Result<String> {
let timeout = time::Duration::from_secs(10);
select! {
        // biased: poll arms in order, so the shutdown arm is always checked first
biased;
// Shutdown signal received — cancel immediately
_ = self.shutdown_rx.changed() => {
Err(anyhow::anyhow!("Operation cancelled: shutdown in progress"))
}
// Timed out
_ = time::sleep(timeout) => {
Err(anyhow::anyhow!("Request to {} timed out after {}s", url, timeout.as_secs()))
}
        // Successful response. Note: select! races only send();
        // reading the body below happens after the race, outside the timeout
        result = self.http.get(url).send() => {
            let response = result?;
            let text = response.text().await?;
            Ok(text)
        }
}
}
}
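When there is no shutdown arm and all you need is a deadline, `tokio::time::timeout(duration, future)` wraps any future in one line. The deadline semantics themselves are not async-specific; a std-only sketch with a blocking channel shows the same shape:

```rust
use std::sync::mpsc::channel;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel::<&str>();

    // Nothing sent yet: the bounded wait elapses and returns Err
    // instead of blocking forever, like a timed-out select! arm.
    assert!(rx.recv_timeout(Duration::from_millis(50)).is_err());

    // Once a value is available, the same call succeeds within the deadline.
    tx.send("done").unwrap();
    assert_eq!(rx.recv_timeout(Duration::from_millis(50)).unwrap(), "done");
}
```

In Tokio, `timeout` returns `Err(Elapsed)` on deadline, so the caller distinguishes "slow" from "failed" the same way.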
Async Streams
Process 1 million records from a database lazily —
don't load them all into memory at once.
use futures::stream::StreamExt;
use sqlx::Row; // for row.get()
use anyhow::Result;

struct Record {
    id: i64,
    data: Vec<u8>,
}

async fn process_all_records(pool: &sqlx::PgPool) -> Result<u64> {
let mut stream = sqlx::query("SELECT id, data FROM records ORDER BY id")
.fetch(pool) // Returns a Stream — lazy, not all in memory
.map(|result| {
result.map(|row| Record {
id: row.get("id"),
data: row.get::<Vec<u8>, _>("data"),
})
});
let mut processed = 0u64;
// Process in chunks for efficiency (batch DB writes, etc.)
let mut batch = Vec::with_capacity(100);
while let Some(result) = stream.next().await {
match result {
Ok(record) => {
batch.push(record);
if batch.len() >= 100 {
process_batch(&batch, pool).await?;
processed += batch.len() as u64;
batch.clear();
}
}
Err(e) => return Err(e.into()),
}
}
// Flush remaining
if !batch.is_empty() {
process_batch(&batch, pool).await?;
processed += batch.len() as u64;
}
Ok(processed)
}
// Alternatively: use buffer_unordered for parallel processing
async fn process_parallel(pool: &sqlx::PgPool) -> Result<u64> {
let count = sqlx::query("SELECT id, data FROM records ORDER BY id")
.fetch(pool)
.map(|result| async move {
let record = result?;
process_record(record).await
})
.buffer_unordered(50) // Max 50 concurrent record processes
.fold(0u64, |count, result| async move {
match result {
Ok(_) => count + 1,
Err(e) => {
tracing::error!("Record processing failed: {e}");
count
}
}
})
.await;
Ok(count)
}

// Helper sketches assumed by the examples above
async fn process_batch(batch: &[Record], _pool: &sqlx::PgPool) -> Result<()> {
    tracing::info!(count = batch.len(), "flushing batch");
    Ok(())
}

async fn process_record(_row: sqlx::postgres::PgRow) -> Result<()> {
    Ok(())
}
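One detail of the batching loop that is easy to get wrong is the final partial batch: skip the flush after the loop and you silently drop up to batch-size minus one records. A std-only sketch of the same shape, with plain integers standing in for records:

```rust
fn main() {
    // 250 records, batch size 100: two full flushes plus one partial flush.
    let mut processed = 0u64;
    let mut batch: Vec<u64> = Vec::with_capacity(100);
    let mut flushes = 0;
    for r in 0..250u64 {
        batch.push(r);
        if batch.len() >= 100 {
            flushes += 1; // stands in for process_batch(...)
            processed += batch.len() as u64;
            batch.clear();
        }
    }
    // Without this final flush, the last 50 records would be lost.
    if !batch.is_empty() {
        flushes += 1;
        processed += batch.len() as u64;
    }
    assert_eq!(processed, 250);
    assert_eq!(flushes, 3); // 100 + 100 + 50
}
```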
Avoiding the Common Pitfalls
Holding Mutex Across Await
My async function deadlocks intermittently.
I'm using std::sync::Mutex with async code.
// ❌ BLOCKS (and can deadlock): std::sync::Mutex guard held across .await
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

async fn bad_pattern(cache: Arc<Mutex<HashMap<String, String>>>, key: String) -> String {
let mut map = cache.lock().unwrap(); // Holds OS mutex
if let Some(value) = map.get(&key) {
return value.clone();
}
    // ⚠️ The MutexGuard is STILL held across this await: other tasks on this
    // thread block, and the future becomes !Send, so tokio::spawn rejects it
    let value = fetch_from_api(&key).await;
map.insert(key, value.clone());
value
}
// ✅ CORRECT: Release lock before awaiting
async fn good_pattern(cache: Arc<Mutex<HashMap<String, String>>>, key: String) -> String {
// Check cache without holding the lock across await
{
let map = cache.lock().unwrap();
if let Some(value) = map.get(&key) {
return value.clone();
}
} // Lock released here
    let value = fetch_from_api(&key).await; // No lock held (two tasks may race to fetch the same key; usually acceptable for a cache)
let mut map = cache.lock().unwrap();
map.insert(key, value.clone());
value
}
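The scoping fix in good_pattern is a plain-threads idea, not an async one. A std-only sketch (hypothetical key names) shows that dropping the guard before the slow work lets callers overlap instead of queuing behind one lock:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let cache = Arc::new(Mutex::new(HashMap::<String, String>::new()));
    let start = Instant::now();
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                // Lock only around the map lookup; the scope ends before the slow work.
                {
                    let map = cache.lock().unwrap();
                    let _ = map.get("key");
                } // guard dropped here
                thread::sleep(Duration::from_millis(100)); // the "fetch", done without the lock
                cache.lock().unwrap().insert(format!("k{i}"), "v".into());
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Four 100ms "fetches" overlap; holding the guard through the sleep
    // would serialize them to roughly 400ms total.
    assert!(start.elapsed() < Duration::from_millis(300));
    assert_eq!(cache.lock().unwrap().len(), 4);
}
```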
// OR: Use tokio::sync::Mutex which IS safe across await points
async fn with_tokio_mutex(cache: Arc<tokio::sync::Mutex<HashMap<String, String>>>, key: String) -> String {
let mut map = cache.lock().await; // Async lock — yields to executor while waiting
if let Some(value) = map.get(&key) {
return value.clone();
}
    let value = fetch_from_api(&key).await; // Safe to hold across await, but it serializes every caller behind this fetch
map.insert(key, value.clone());
value
}
Blocking the Executor
// ❌ BAD: CPU-intensive work on async executor thread blocks other tasks
async fn compress_data(data: Vec<u8>) -> Vec<u8> {
// This blocks the executor thread for potentially seconds!
zstd::encode_all(&data[..], 0).unwrap()
}
// ✅ GOOD: Offload to blocking thread pool
async fn compress_data(data: Vec<u8>) -> Vec<u8> {
tokio::task::spawn_blocking(move || {
zstd::encode_all(&data[..], 0).unwrap()
})
.await
.unwrap()
}
For the Rust web framework that uses these async patterns, see the Axum guide. For eBPF and other systems programming where Rust competes with C, the eBPF guide shows the tradeoffs. The Claude Skills 360 bundle includes Rust skill sets covering async patterns, ownership in concurrent code, and Tokio tuning. Start with the free tier to try async Rust code generation.