Patterns That Work
After all the theory, here is the small set of idioms that handle the overwhelming majority of real cases. Memorize the shapes. When you reach for a pattern, reach for one of these first.
Pattern 1: 'static bounds when you can afford them
The fastest way out of a lifetime fight is to take a 'static bound. If your function takes ownership of its inputs (or they’re literals, or they’re Arc’d), you can write 'static everywhere and stop worrying about lifetime relations entirely.
fn enqueue<T: Send + 'static>(queue: &Queue<T>, item: T) {
    queue.push(item);
}
This is right when:
- The function genuinely owns or shares ownership of the data.
- The data is going to outlive any individual scope (passed to a thread, stored in a long-lived structure, sent across a channel).
- You don’t need to maintain a borrow relationship between input and output.
This is wrong when:
- The data should be borrowed and returned (turn an input slice into a sub-slice). Don’t force ownership for this; thread the lifetime.
- The data is small and frequently constructed (don’t Arc an i64).
- You’ll be cloning excessively just to satisfy the bound.
When in doubt: 'static is the easy answer when ownership is appropriate, and the wrong answer when borrowing is. The rule of thumb: if your function would be just as correct taking owned values, use 'static.
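The borrowed alternative looks like this: a sketch (the `first_word` function is hypothetical) of a function whose output is a view into its input, where threading the lifetime, here fully elided, is the right call rather than demanding ownership or 'static.

```rust
// When the output is a sub-slice of the input, borrow: the elided
// lifetime ties the return value to `line`, with no clone and no 'static.
fn first_word(line: &str) -> &str {
    line.split_whitespace().next().unwrap_or("")
}
```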
Pattern 2: Arc<T> for cheap shared ownership
Arc is the heat sink for “I want multiple owners and I don’t want to think about lifetimes.” Wrap the data once, clone the Arc per owner, accept the slight overhead of atomic refcounting and the loss of mutability (without an interior mutability primitive).
let config = Arc::new(load_config());
for _ in 0..workers {
    let config = config.clone();
    tokio::spawn(async move {
        run_worker(config).await;
    });
}
Arc is the right tool when:
- Multiple async tasks or threads need to read shared data.
- Lifetimes would otherwise force you into elaborate borrowing schemes.
- The data is large enough that cloning it would be wasteful.
Arc is the wrong tool when:
- The data is cheap to clone (a String or Vec<u8> of small size). Just clone.
- You need mutability and aren’t ready to add a Mutex (next pattern).
- You’re using it inside a tight loop where the atomic operations matter (rare).
The cost is one atomic increment per clone and one atomic decrement per drop. For application code, this is negligible. For libraries that benchmark in nanoseconds, sometimes it isn’t.
Pattern 3: Arc<Mutex<T>> and when it’s wrong
The famous Arc<Mutex<T>> is the canonical “I need shared mutable state” pattern. It works. It is also the pattern most likely to disguise a worse problem.
let counter = Arc::new(Mutex::new(0u64));
for _ in 0..workers {
    let counter = counter.clone();
    tokio::spawn(async move {
        // tokio::sync::Mutex: lock() is awaited rather than blocking
        let mut g = counter.lock().await;
        *g += 1;
    });
}
Arc<Mutex<T>> is the right tool when:
- The state is genuinely shared between many writers.
- The state is large enough that message-passing alternatives would be slower.
- The locking is fine-grained enough that contention isn’t the bottleneck.
Arc<Mutex<T>> is the wrong tool when:
- A single producer and a single consumer can do the job (use a channel).
- The state is a counter (use AtomicU64).
- Contention is high and you have many short critical sections (consider sharding the data).
- You’re going to await while holding the lock and the lock is acquired by many tasks (you’ve now serialized them; consider whether the await is necessary inside the critical section).
The hidden trap with tokio::sync::Mutex is the last bullet. Holding a lock across .await blocks every other future trying to acquire it for the duration. If your critical section involves I/O, you have just turned a parallel system into a serial one. The fix is usually to do the I/O outside the lock, even if it means cloning data.
The decision rule: prefer std::sync::Mutex (or parking_lot::Mutex) and don’t hold it across .await. Use tokio::sync::Mutex only when you genuinely need to await while the lock is held, and accept that you have introduced a serialization point.
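The "do the I/O outside the lock" fix can be sketched dependency-free with std::sync::Mutex; the `flush` and `send_snapshot` names are hypothetical, and `send_snapshot` stands in for the slow I/O call that must not run while the guard is alive.

```rust
use std::sync::{Arc, Mutex};

// Take what you need out of the critical section, drop the guard, then
// do the slow work. The lock is held only for the memory swap.
fn flush(buf: &Arc<Mutex<Vec<u8>>>) -> usize {
    let snapshot = {
        let mut g = buf.lock().unwrap();
        std::mem::take(&mut *g) // move the data out; clone also works
    }; // guard dropped here, before the slow part
    send_snapshot(&snapshot)
}

// Placeholder for the real I/O (a write, an RPC, an .await in async code).
fn send_snapshot(data: &[u8]) -> usize {
    data.len()
}
```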
Pattern 4: Channel-based ownership transfer
When you have data that needs to flow from one task to another, pass it through a channel rather than sharing it. This eliminates lifetime issues entirely (the data is moved into the channel, then moved out) and makes the dependency graph explicit.
let (tx, mut rx) = tokio::sync::mpsc::channel(32);
tokio::spawn(async move {
    while let Some(work) = rx.recv().await {
        process(work).await;
    }
});
for item in items {
    tx.send(item).await.unwrap();
}
Channels are the right tool when:
- One task produces data, another consumes it.
- You want backpressure (use mpsc::channel with a bounded capacity).
- The data flow is unidirectional or has a clear request/response shape (then use oneshot for the response).
- You want clear ownership transitions in the type system.
Channels are the wrong tool when:
- You need bidirectional state synchronization (a channel is a one-way pipe; bidirectional ends up being two channels and a protocol, which is fine but more complex).
- The data is small and read-mostly (an Arc is cheaper).
- You need to implement a complex coordination pattern that channels would make awkward (consider explicit message-passing actors instead; see pattern 5).
tokio::sync provides several channel flavors: mpsc (multi-producer single-consumer), broadcast (multi-consumer broadcast), watch (single-value, last-write-wins), and oneshot (one message, one consumer). Pick the one that matches the communication shape. Using the wrong channel for the pattern is a common source of awkward code.
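The backpressure point is worth seeing concretely. A sketch using std's bounded mpsc::sync_channel in place of tokio's bounded mpsc::channel (the `producer_pushed_back` function is hypothetical): once the buffer is full, the producer cannot make progress until the consumer drains an item.

```rust
use std::sync::mpsc;

// sync_channel(2) holds at most two in-flight items. try_send fails when
// the buffer is full (an async send would wait instead), which is exactly
// the backpressure the bounded capacity buys you.
fn producer_pushed_back() -> bool {
    let (tx, rx) = mpsc::sync_channel(2);
    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();
    let was_full = tx.try_send(3).is_err(); // buffer full: producer pushed back
    let _ = rx.recv().unwrap();             // consumer drains one item...
    was_full && tx.try_send(3).is_ok()      // ...and the producer proceeds
}
```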
Pattern 5: Actor pattern
When you have state that needs serialized access from many places, and the access patterns are complex enough that locking would be error-prone, encapsulate the state in a single task that owns it, and have everyone else send messages.
struct CacheActor { items: HashMap<String, Item> }

enum CacheMsg {
    Get { key: String, reply: oneshot::Sender<Option<Item>> },
    Put { key: String, value: Item },
}

async fn cache_actor(mut rx: mpsc::Receiver<CacheMsg>) {
    let mut state = CacheActor { items: HashMap::new() };
    while let Some(msg) = rx.recv().await {
        match msg {
            CacheMsg::Get { key, reply } => {
                let _ = reply.send(state.items.get(&key).cloned());
            }
            CacheMsg::Put { key, value } => {
                state.items.insert(key, value);
            }
        }
    }
}

#[derive(Clone)]
struct CacheHandle { tx: mpsc::Sender<CacheMsg> }

impl CacheHandle {
    async fn get(&self, key: String) -> Option<Item> {
        let (reply, rx) = oneshot::channel();
        self.tx.send(CacheMsg::Get { key, reply }).await.ok()?;
        rx.await.ok()?
    }
    async fn put(&self, key: String, value: Item) {
        let _ = self.tx.send(CacheMsg::Put { key, value }).await;
    }
}
The actor pattern is right when:
- The state is non-trivial (composite, with invariants between fields).
- Multiple call sites need both read and write access.
- The state’s mutation is the system’s logical bottleneck anyway (one actor processes one message at a time).
- You want clear, type-checked APIs at the message boundary.
The actor pattern is wrong when:
- The state is simple enough that Mutex<T> is fine.
- You need fan-out parallelism on reads (an actor serializes everything; a RwLock lets readers proceed concurrently).
- The message-handling latency matters more than throughput (actors add a queue and a context switch per operation).
Actors are not faster than locks. They are cleaner. The clarity is the value: the type of CacheHandle is the API, the actor’s loop is the implementation, and there is no way for any other code to corrupt the cache because no other code has access to the state. For complex domain state, this is worth a lot.
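The same wiring can be sketched without tokio, using std threads and channels so it runs anywhere; the `Msg` enum and `spawn_cache` are hypothetical stand-ins for the CacheMsg/CacheHandle pair above, with a per-request std mpsc channel playing the role of oneshot.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Each Get carries its own reply channel; the actor thread is the only
// code that ever touches the HashMap.
enum Msg {
    Get { key: String, reply: mpsc::Sender<Option<String>> },
    Put { key: String, value: String },
}

// Spawn the actor loop and hand back the sender as the handle.
fn spawn_cache() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut items: HashMap<String, String> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Get { key, reply } => {
                    let _ = reply.send(items.get(&key).cloned());
                }
                Msg::Put { key, value } => {
                    items.insert(key, value);
                }
            }
        }
    });
    tx
}
```

Because a single channel feeds the actor, a Put followed by a Get is processed in order; there is no lock to mis-hold and no way for callers to see a half-applied update.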
Pattern 6: Cow for “borrowed unless I have to clone”
std::borrow::Cow<'a, T> (clone-on-write) holds either a borrow or an owned value, transparently. It’s useful when most calls don’t need to allocate but some do.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains('\r') {
        Cow::Owned(input.replace("\r\n", "\n"))
    } else {
        Cow::Borrowed(input)
    }
}
Cow is right when:
- Most callers don’t need to modify the data and you can return a borrow.
- A minority of cases need an owned copy.
- The caller doesn’t care which case they got.
Cow is wrong when:
- You’re using it to “decide later” between owned and borrowed because you weren’t sure. The decision should be in the API, not deferred.
- The caller needs to know whether they got a borrow or an owned value.
- The data is so cheap to clone that branching on Cow::Owned / Cow::Borrowed is more expensive than just cloning.
Cow is one of those types that solves a real problem and gets reached for too often. The right test is: does this function actually have a fast path that returns a borrow and a slow path that returns an owned value, where the caller treats them the same? If yes, Cow. If no, pick a type and stick with it.
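You can check that test directly. A sketch repeating the normalize shape from above (the `took_fast_path` helper is hypothetical): the common case returns a borrow and allocates nothing, and only inputs containing \r\n pay for an owned String.

```rust
use std::borrow::Cow;

// Same shape as the normalize example above.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains('\r') {
        Cow::Owned(input.replace("\r\n", "\n"))
    } else {
        Cow::Borrowed(input)
    }
}

// Inspect which path a call took; callers that don't care never need this.
fn took_fast_path(c: &Cow<'_, str>) -> bool {
    matches!(c, Cow::Borrowed(_))
}
```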
Pattern 7: 'static futures via owned data
For futures you want to spawn (which require 'static), the standard recipe is: take all data by value, share via Arc, no borrows.
async fn process(item: Item, deps: Arc<Deps>) -> Result<Output, Error> {
    deps.client.fetch(item.id).await
}

tokio::spawn(process(item, deps.clone()));
This sidesteps every async lifetime issue from chapter 5. The future borrows nothing from the calling scope. It is 'static. It is Send (assuming all the types are). It can be spawned freely.
The cost is Arc::clone per spawn. The benefit is no lifetime annotations, no propagating borrows through a call graph, no mysterious '_ errors at the spawn site.
For most async application code, this is the default. Borrow only when you have a specific reason and have considered the alternative.
Pattern 8: The “give up and clone” rule
Sometimes you are fighting the borrow checker, the type system is right, and the fix is to just clone. This is a rule, not a defeat:
If you have spent more than ten minutes on a borrow checker error and the offending data would clone in under a microsecond, clone it.
Engineering time is more expensive than CPU time. A String::clone of a 100-byte string takes tens of nanoseconds. An Arc::clone is a single atomic increment. A Vec<u8>::clone of a 1KB buffer is well under a microsecond. None of these matter in any code path that isn’t a tight inner loop.
The rule applies to:
- “I need this String in two places.” → Clone.
- “This struct contains a Vec and I want to use it after passing one to a function.” → Clone the Vec.
- “I want to spawn a future borrowing &self.” → Take Arc<Self> and clone the Arc.
- “I want to send this Config to twelve workers.” → Wrap in Arc, clone twelve times.
The rule does not apply to:
- Anything in a hot inner loop where you can profile the clone cost.
- Large data where the clone is genuinely expensive (a Vec<u8> of a megabyte).
- Cases where the lifetime structure is teaching you about a real design issue (which it sometimes is; see chapter 10).
The rule’s value is freeing you from a category of fights you don’t need to have. If the lifetime issue is real, you’ll come back to it; if it isn’t, you’ve moved on.
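The "spawn a future borrowing &self" entry deserves one concrete shape, since it comes up constantly. A sketch using a thread in place of tokio::spawn to stay dependency-free (the `Service` type and `spawn_report` method are hypothetical): take `self: Arc<Self>` so the spawned work owns its handle.

```rust
use std::sync::Arc;
use std::thread;

struct Service {
    name: String,
}

impl Service {
    // Taking Arc<Self> instead of &self makes the closure own everything
    // it captures, so it is 'static and can be spawned freely.
    fn spawn_report(self: Arc<Self>) -> thread::JoinHandle<String> {
        thread::spawn(move || format!("report from {}", self.name))
    }
}
```

Call sites clone the Arc per spawn: `svc.clone().spawn_report()`. The clone is one atomic increment, which is the whole point of the rule.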
Combining patterns
Real code combines several of these. A typical async service might:
- Use Arc<Config> (pattern 2) for read-mostly configuration.
- Use an actor (pattern 5) for the serial-access state machine of a connection or session.
- Use Arc<Mutex<Metrics>> (pattern 3) for high-throughput counters where contention is acceptable.
- Pass Item and Arc<Deps> to spawned tasks (pattern 7) to avoid borrow propagation.
- Use channels (pattern 4) between major subsystems.
- Use 'static bounds (pattern 1) on every public spawn-able function.
This is a standard production-Rust shape. None of it is exotic. Most of it is dictated by what the type system requires plus what’s idiomatic in tokio. After enough projects, this shape becomes the default, and you reach for it before reaching for an exotic lifetime annotation.
What this leaves out
These patterns do not cover:
- Lock-free algorithms (use crossbeam and a copy of The Art of Multiprocessor Programming).
- Custom executors (rare; if you’re writing one, you already know more than this book).
- Embedded async (different runtime model; see embassy and embedded-hal).
- High-performance networking (look at glommio, monoio, or do the I/O yourself with mio).
Those domains have their own pattern languages. The patterns in this chapter cover application-level Rust, which is most of what most engineers write most of the time.
Sources
- Tokio’s tutorial, particularly the chapter on shared state, is the canonical introduction to several of these patterns.
- Alice Ryhl’s Actors with Tokio post is the standard reference for the actor pattern.
- Jon Gjengset’s Rust for Rustaceans has thorough treatment of Arc, Mutex, and channel patterns in real code.