Async Rust: Tokio and the Runtime Under Your Feet
Let's talk about what async Rust is actually doing, because if you don't understand the model, you will eventually write a bug that is very difficult to debug and very embarrassing to explain.
The Model in One Paragraph
Rust's async/await is syntactic sugar over a state machine. When you write async fn, the compiler transforms it into a type that implements the Future trait. That future does nothing until something polls it. The thing that polls futures is the async runtime — in our case, Tokio. Tokio maintains a thread pool and an I/O event loop (backed by epoll/kqueue/IOCP depending on your OS), and it drives your futures forward whenever they're ready to make progress.
This is not the same as Go's goroutines or Python's asyncio event loop, though the surface syntax looks similar. The key difference: in Rust, futures are lazy and zero-cost. You can create a million futures without allocating a million stacks. They just sit there until polled.
Adding Tokio
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```
The "full" feature flag enables everything: the multi-threaded scheduler, timers, I/O, channels, sync primitives, the works. For production you might slim this down, but for learning and most applications, full is fine.
The Simplest Possible Async Program
```rust
#[tokio::main]
async fn main() {
    println!("Hello from async land");
}
```
The #[tokio::main] macro desugars to roughly:
```rust
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            println!("Hello from async land");
        });
}
```
block_on is the bridge between synchronous main and the async world. It starts the Tokio runtime and drives the future you give it until completion. Everything else happens inside that runtime.
Spawning Tasks
The async model shines when you have concurrent work. Tasks are Tokio's unit of concurrency:
```rust
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        sleep(Duration::from_millis(100)).await;
        println!("Task completed");
        42_u32
    });

    // Do other work here while the task runs...
    println!("Doing other things");

    let result = handle.await.expect("task panicked");
    println!("Task returned: {result}");
}
```
tokio::spawn returns a JoinHandle<T>. Awaiting it gives you Result<T, JoinError> — the error case happens if the task panicked. Panics in spawned tasks don't propagate to the spawner automatically; they get captured in the JoinError. This is important: a panic in a spawned task will not crash your server unless you explicitly propagate it.
The Big Rule: Don't Block the Async Runtime
Here is the thing that will bite you:
```rust
// DON'T DO THIS
async fn bad_handler() {
    std::thread::sleep(std::time::Duration::from_secs(5)); // ← BLOCKS
    // While this is sleeping, this thread cannot poll OTHER futures.
    // Your server just stopped handling requests.
}
```
Tokio's multi-threaded runtime has a fixed number of worker threads (defaulting to the number of CPU cores). If you block one of them with std::thread::sleep, a synchronous I/O call, or a long CPU-bound computation, you're stealing a thread from the runtime. Block all threads and your entire service stalls.
The solutions:
For async I/O (network, files, database): use async equivalents. tokio::time::sleep instead of std::thread::sleep. tokio::fs::read instead of std::fs::read.
For CPU-bound work: use tokio::task::spawn_blocking:
```rust
async fn compute_hash(data: Vec<u8>) -> String {
    // spawn_blocking runs the closure on a dedicated blocking thread pool,
    // separate from Tokio's async worker threads.
    tokio::task::spawn_blocking(move || {
        // Expensive CPU work here; blocking is fine.
        use sha2::{Sha256, Digest};
        let mut hasher = Sha256::new();
        hasher.update(&data);
        format!("{:x}", hasher.finalize())
    })
    .await
    .expect("blocking task panicked")
}
```
spawn_blocking uses a separate thread pool sized for blocking operations. The default limit is 512 threads — enough to handle many concurrent blocking operations without exhausting OS resources.
Channels: Communication Between Tasks
Tokio provides async-aware versions of the standard channel types:
```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Multi-producer, single-consumer channel
    let (tx, mut rx) = mpsc::channel::<String>(32); // buffer size 32

    // Spawn a producer
    let tx_clone = tx.clone();
    tokio::spawn(async move {
        for i in 0..5 {
            tx_clone.send(format!("message {i}")).await.unwrap();
        }
    });

    // Drop the original sender so the receiver knows when all senders are gone
    drop(tx);

    // Consume messages
    while let Some(msg) = rx.recv().await {
        println!("Got: {msg}");
    }
    println!("Channel closed");
}
```
When all senders are dropped, rx.recv() returns None. This is how you signal shutdown.
Tokio's other channel types:
| Type | Use case |
|---|---|
| `mpsc` | Multiple producers, one consumer |
| `oneshot` | Single value, single sender |
| `broadcast` | Multiple producers, multiple consumers; every receiver gets every message |
| `watch` | Broadcast the latest value (new receivers see current state) |
watch is particularly useful for configuration that can change at runtime, or for broadcasting shutdown signals:
```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (shutdown_tx, mut shutdown_rx) = watch::channel(false);

    tokio::spawn(async move {
        // Simulate a shutdown signal after 1 second
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
        shutdown_tx.send(true).unwrap();
    });

    // Wait for the shutdown signal
    loop {
        tokio::select! {
            _ = shutdown_rx.changed() => {
                if *shutdown_rx.borrow() {
                    println!("Shutting down");
                    break;
                }
            }
        }
    }
}
```
select!: Concurrency Within a Task
tokio::select! lets you race multiple futures and act on whichever completes first:
```rust
use tokio::time::{sleep, Duration};

async fn fetch_data() -> String {
    sleep(Duration::from_millis(200)).await;
    "data".to_string()
}

async fn fetch_with_timeout() -> Option<String> {
    tokio::select! {
        data = fetch_data() => Some(data),
        _ = sleep(Duration::from_millis(100)) => {
            println!("Timed out waiting for data");
            None
        }
    }
}
```
The branches that don't win are cancelled. This is one of the places where Rust's cancellation model differs from most languages: cancellation is cooperative and happens at .await points. There's no forced thread interruption. This is safe, but it means you need to think about what state you leave behind when a future is dropped mid-execution.
Shared State with Mutex
When multiple tasks need mutable access to the same data:
```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;

type SharedCache = Arc<Mutex<HashMap<String, String>>>;

async fn get_or_set(cache: SharedCache, key: String, value: String) -> String {
    let mut map = cache.lock().await;
    map.entry(key).or_insert(value).clone()
}
```
Note tokio::sync::Mutex, not std::sync::Mutex. The Tokio version is async-aware: while waiting for the lock it yields back to the runtime instead of blocking the thread. Use std::sync::Mutex only when you hold the lock for a very short time and never across an await point. (Helpfully, the compiler often catches the latter mistake: a std MutexGuard is not Send, so a future that holds one across an .await can't be passed to tokio::spawn on the multi-threaded runtime. That's a nice safety net.)
RwLock is available too, for read-heavy workloads where multiple concurrent readers should be allowed.
The Tokio Runtime Configuration
The default #[tokio::main] creates a multi-threaded runtime. You can configure it explicitly:
```rust
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)          // Fix the thread count
        .max_blocking_threads(64)   // Limit the blocking thread pool
        .thread_name("my-runtime")  // Useful for profiling
        .enable_all()
        .build()
        .expect("Failed to build runtime")
        .block_on(run());
}

async fn run() {
    // your application here
}
```
For tests and single-threaded applications, there's also new_current_thread(), which runs everything on the calling thread. That makes scheduling much easier to reason about in tests, but it's a poor fit for production servers.
What You Actually Need to Remember
- Futures do nothing until polled. The runtime polls them.
- Don't block worker threads: use `spawn_blocking` for CPU-bound or sync I/O work.
- Panics in spawned tasks are captured in the `JoinHandle`. Handle them.
- `tokio::select!` races futures; cancelled branches stop at their next `.await`.
- Use `tokio::sync::Mutex` when you hold the lock across an `.await`. Use `std::sync::Mutex` otherwise.
The next chapter builds an HTTP server on top of all of this. The async foundations matter there — every request is a task, every handler is an async function, and Axum's whole extractor machinery is built on Future.