
When the Type System Is Right and You’re Wrong

Sometimes the borrow checker rejects code that has a real bug. The bug is not “the lifetime annotations are wrong”; the bug is that the program, as written, would do something subtly broken if the compiler let it through. The lifetime annotations are how the compiler is describing the bug.

This chapter is about those cases. Not the cases where you fight the compiler and eventually win; the cases where you fight the compiler, lose, and end up grateful that you lost.

The trap: assuming the compiler is being pedantic

Most discussions of borrow checker errors carry an undercurrent of “the compiler is technically right but pragmatically wrong.” Sometimes that’s accurate. Often it isn’t. The lifetime rules and the trait solver are designed to enforce memory safety and data race freedom, and those are genuine concerns even when you don’t immediately see how they apply.

The first reflex you have to develop is: when the compiler rejects something, ask what could go wrong if it accepted it. Sometimes the answer is “nothing in this specific case, the compiler is over-approximating.” Other times the answer is “oh.”

Here are the genres of “oh.”

Iterator invalidation

let mut items = vec![1, 2, 3, 4];
for item in &items {
    if *item == 2 {
        items.push(5);
    }
}

The borrow checker rejects this:

error[E0502]: cannot borrow `items` as mutable because it is also borrowed as immutable

A C++ developer’s first reaction is “but it’s just a push, what’s the harm?” The harm is iterator invalidation. items.push(5) may reallocate the vector’s backing storage. After reallocation, the iterator (which is a pointer into the old storage) is dangling. Reading *item on the next iteration is undefined behavior.

In C++, this is one of the most common sources of memory corruption bugs. In Rust, the borrow checker makes it impossible. The lifetime annotation that says “the iterator borrows from the vector, so the vector cannot be mutated while the iterator exists” is enforcing a real safety property, not being prissy.

The fix is structural: collect the indices to modify, then modify, then re-iterate. Or use iter_mut if you only need to mutate elements (not change the structure).
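A minimal sketch of both fixes, sticking with the toy vector from above (the helper name append_if_contains_two is invented for illustration):

```rust
// Two-pass fix: decide during the immutable pass, mutate after it ends.
fn append_if_contains_two(items: &mut Vec<i32>) {
    // Pass 1: immutable borrow, just record the decision.
    let should_append = items.iter().any(|&x| x == 2);
    // Pass 2: the immutable borrow has ended; structural mutation is allowed.
    if should_append {
        items.push(5);
    }
}

fn main() {
    let mut items = vec![1, 2, 3, 4];
    append_if_contains_two(&mut items);
    assert_eq!(items, vec![1, 2, 3, 4, 5]);

    // iter_mut is fine when you only mutate elements, not the structure.
    for item in items.iter_mut() {
        *item *= 10;
    }
    assert_eq!(items, vec![10, 20, 30, 40, 50]);
}
```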

The lesson: when a borrow checker error shows up at a “mutate while iterating” pattern, the compiler is telling you something true. Even if you “know” the mutation is safe in this specific case (e.g., a fixed-size array), the rule that protects you is worth keeping.

Holding a guard across an await

async fn process(state: Arc<Mutex<State>>) -> Result<Output, Error> {
    let mut g = state.lock().await;
    let response = http_get(&g.url).await?;  // <- holding lock across await
    g.last_response = Some(response.clone());
    Ok(response.into())
}

This compiles, but if you write the multi-task version where many tasks call process concurrently, you have just serialized them. Every task takes the lock, makes the HTTP call, and only then releases. Concurrency is gone.

This is not a borrow checker error — it compiles fine. But it is a case where the type system gave you a tool (tokio::sync::Mutex with MutexGuard held across .await) that you used wrongly. The fix is to not hold the lock across the await:

async fn process(state: Arc<Mutex<State>>) -> Result<Output, Error> {
    let url = { state.lock().await.url.clone() };
    let response = http_get(&url).await?;
    state.lock().await.last_response = Some(response.clone());
    Ok(response.into())
}
```

Now the lock is held only for the data extraction and the data update, both of which are fast. The slow part — the HTTP call — happens with no lock held.

The compiler isn’t going to tell you which version is right. They both compile. But if you find yourself reaching for tokio::sync::Mutex because you “need to await while holding the lock,” ask whether the await is something that should be inside the critical section. Almost always, it isn’t.
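The extract/compute/update shape is not specific to async; the same pattern can be demonstrated with blocking threads and std::sync::Mutex. In this sketch, slow_uppercase is a hypothetical stand-in for the HTTP call, and the State fields mirror the example above:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct State {
    url: String,
    last_response: Option<String>,
}

// Hypothetical slow step standing in for http_get.
fn slow_uppercase(s: &str) -> String {
    s.to_uppercase()
}

fn process(state: &Arc<Mutex<State>>) -> String {
    // Lock only long enough to copy out what the slow step needs.
    let url = state.lock().unwrap().url.clone();

    // The slow work runs with no lock held, so other threads proceed.
    let response = slow_uppercase(&url);

    // Lock again, briefly, to record the result.
    state.lock().unwrap().last_response = Some(response.clone());
    response
}

fn main() {
    let state = Arc::new(Mutex::new(State {
        url: "example.com".into(),
        last_response: None,
    }));

    // Four workers run concurrently; none blocks the others on the slow step.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let state = Arc::clone(&state);
            thread::spawn(move || process(&state))
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), "EXAMPLE.COM");
    }
    assert_eq!(
        state.lock().unwrap().last_response.as_deref(),
        Some("EXAMPLE.COM")
    );
}
```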

Sharing a RefCell across threads

You can’t. The compiler stops you because RefCell is !Sync. People sometimes “fix” this by reaching for unsafe impl Sync for MyType {} or by wrapping in Arc<Mutex<RefCell<T>>> (which is silly — the Mutex is already an interior mutability primitive).

The compiler is right. RefCell does runtime borrow checking via a counter that is not atomic. If you share a RefCell across threads, two threads can both increment the borrow counter, both think they have unique access, and both write through &mut simultaneously. This is a data race. It is undefined behavior.

The fix is Mutex (for blocking threads) or RwLock (for many-reader, few-writer) or one of the lock-free alternatives. Not unsafe impl Sync on RefCell.

This shows up most often in async code where someone wants Rc<RefCell<T>> (because it’s cheaper than Arc<Mutex<T>>) and then tries to spawn the future. The error is real. Use Arc<Mutex<T>> or restructure so the state stays thread-local (tokio::task::LocalSet).
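The runtime-counter behavior is easy to observe single-threaded. This sketch uses try_borrow_mut to surface the conflict that, across threads, the non-atomic counter could fail to detect:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(0);

    // RefCell enforces the borrow rules with a runtime counter.
    let guard = cell.borrow_mut();
    // A second mutable borrow is detected and rejected at runtime:
    assert!(cell.try_borrow_mut().is_err());
    drop(guard);

    // That counter is not atomic. Two threads bumping it concurrently
    // could both "succeed" and both get &mut access - a data race -
    // which is exactly why RefCell is !Sync and the compiler refuses
    // to let you share one across threads.
    *cell.borrow_mut() += 1;
    assert_eq!(*cell.borrow(), 1);
}
```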

Returning a reference to a local

fn build() -> &str {
    let s = String::from("hello");
    &s
}

The compiler rejects this: E0106 first, because the returned &str has no input lifetime to borrow from, and E0515 (cannot return a reference to a local) once you annotate one. People sometimes try to “fix” it with lifetime tricks or by leaking the String to make it 'static. Both are wrong answers. The right answer is: return ownership.

fn build() -> String {
    String::from("hello")
}

The compiler is enforcing that you don’t return a dangling reference. The reason this case “feels obvious” is that it’s the simple version of a problem that, in larger programs, isn’t obvious — you build up a graph of references, return one, and somewhere down the call chain a value goes out of scope and your “valid” reference is dangling. The borrow checker stops the simple case so the complex case doesn’t compile either.

In C, returning a pointer to a local is one of the most common sources of crashes. The fact that Rust catches it at compile time, rather than via valgrind in production, is the entire selling proposition.
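For contrast, returning a reference is fine when the data already belongs to the caller; the problem is only returning a borrow of something the function itself created. A small sketch of both cases (first_word is an invented example):

```rust
// Borrowing is fine when the returned &str points into caller-owned data:
// the signature ties the output lifetime to the input.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// When the function creates the data, return ownership instead.
fn build() -> String {
    String::from("hello")
}

fn main() {
    let sentence = String::from("hello world");
    assert_eq!(first_word(&sentence), "hello");
    assert_eq!(build(), "hello");
}
```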

Cyclic data structures

Try to write a doubly-linked list in safe Rust using plain references. Try to write a graph with bidirectional edges the same way. The borrow checker will reject every attempt.

The reason: cyclic ownership doesn’t work. If A owns B and B owns A, neither can be dropped, because each would have to be dropped before the other. Rust’s single-ownership model (its affine types) rules this out.

You have three options:

  1. Use Rc<RefCell<T>> cycles. Works, but creates memory leaks (refcount cycles aren’t collected). Use Weak references for the back-edges to break the cycle.
  2. Use index-based graphs. Store nodes in a Vec<Node> and refer to them by usize indices. The graph is an explicit data structure, not an implicit reference web. This is the conventional answer.
  3. Use unsafe. Carefully. Like std::collections::LinkedList does. With Miri verification.
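Option 1 with Weak back-edges might look like this parent/child sketch (the Node layout here is an illustration, not a prescribed design):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,      // back-edge: Weak, breaks the cycle
    children: RefCell<Vec<Rc<Node>>>, // forward edge: owning Rc
}

fn main() {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // The back-edge upgrades while the parent is alive...
    assert_eq!(child.parent.borrow().upgrade().unwrap().value, 1);
    // ...and Weak doesn't count as ownership, so the parent's strong
    // count stays 1 and dropping it actually frees the memory.
    assert_eq!(Rc::strong_count(&parent), 1);
}
```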

For 95% of real graph problems, option 2 is the right answer. The cases where it isn’t are mostly performance-critical data structures (intrusive lists, lock-free structures), where you fall back to option 3.

The compiler is, here, telling you that the data structure you wanted is not free in any language; it just hides the cost in other languages. In a GC’d language, the cycles are a real problem too, just deferred to the runtime. In Rust, the cost is moved to compile time and made explicit.
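A minimal sketch of option 2, the index-based graph. The Graph type and its edge representation are illustrative choices; real code would likely use an adjacency list or a crate like petgraph:

```rust
// Nodes live in a Vec; edges refer to them by index. No reference
// cycles exist, so the borrow checker has nothing to object to.
struct Graph {
    nodes: Vec<String>,
    edges: Vec<(usize, usize)>, // each pair is a bidirectional edge
}

impl Graph {
    fn add_node(&mut self, label: &str) -> usize {
        self.nodes.push(label.to_string());
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, a: usize, b: usize) {
        self.edges.push((a, b));
    }

    fn neighbors(&self, n: usize) -> Vec<usize> {
        self.edges
            .iter()
            .filter_map(|&(a, b)| {
                if a == n { Some(b) } else if b == n { Some(a) } else { None }
            })
            .collect()
    }
}

fn main() {
    let mut g = Graph { nodes: Vec::new(), edges: Vec::new() };
    let a = g.add_node("a");
    let b = g.add_node("b");
    let c = g.add_node("c");
    g.add_edge(a, b);
    g.add_edge(b, c);
    // Edges are "bidirectional" for free: neighbors scans both columns.
    assert_eq!(g.neighbors(b), vec![a, c]);
}
```

The trade-off: indices can dangle logically (a removed node leaves stale indices), so deletion needs care, but that bug is an ordinary logic error rather than undefined behavior.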

Variance preventing aliased mutability

Recall the example from chapter 2:

fn assign<T>(input: &mut T, val: T) { *input = val; }

fn main() {
    let mut hello: &'static str = "hello";
    {
        let world = String::from("world");
        assign(&mut hello, &world);  // ERROR
    }
    println!("{hello}");
}

The compiler rejects this on lifetime grounds. The underlying reason is variance: &mut T is invariant in T, so &mut &'static str cannot be coerced to &mut &'short str. Without that invariance, assign would assign a non-'static reference into a slot typed as 'static, and then we’d read from that slot after the non-'static source went out of scope.

This is the variance rule doing real work. The error message is unhelpful, but the underlying check is preventing a use-after-free. If you find yourself fighting an invariance error, the question to ask is: would the operation, if allowed, let me write data with a shorter lifetime into a slot expecting a longer one? If yes, the compiler is right.
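A sketch of both sides of the rule: shared references shrink freely (covariance), while the mutable version only compiles once the slot owns its data, so there is no shorter-lived borrow to smuggle into a longer-lived slot:

```rust
fn assign<T>(input: &mut T, val: T) {
    *input = val;
}

fn main() {
    // Covariance: &'static str can be used wherever a shorter-lived
    // &str is expected, because shrinking a lifetime is always safe.
    let hello: &'static str = "hello";
    let shorter: &str = hello;
    assert_eq!(shorter, "hello");

    // Through &mut the lifetimes must match exactly (invariance).
    // The safe fix: make the slot own its data, then move/clone into it.
    let mut owned = String::from("hello");
    {
        let world = String::from("world");
        assign(&mut owned, world.clone());
    }
    // `world` is gone, but `owned` holds its own copy: nothing dangles.
    assert_eq!(owned, "world");
}
```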

When to suspect the compiler is wrong vs. right

Heuristics for telling these cases apart:

Suspect the compiler is right when:

  • The error involves multi-threading, async, or shared mutable state.
  • The error is about lifetime relations between values that came from different scopes.
  • The error mentions Send or Sync and you’re trying to bypass them with unsafe impl.
  • Your fix is “wrap in Box::leak to make it 'static” (this leaks memory; if you can afford to leak, you can afford to clone and own).
  • You’re tempted to use raw pointers or transmute to make a borrow checker error go away.

Suspect the compiler is over-approximating when:

  • The error is about a self-contained data structure where the references genuinely don’t escape (e.g., split-borrows where the compiler can’t see disjointness).
  • The error is about HRTB inference that’s been wrong before and gotten fixed in later compiler versions.
  • The error involves a closure capturing something that genuinely doesn’t need to be captured.
  • You can articulate the invariant that the code maintains, and the invariant doesn’t depend on global program reasoning.

In the first set of cases, fix the code, not the compiler. In the second set, file an issue or use a workaround (split_at_mut, explicit lifetime annotations, Box::pin).
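As an example of the over-approximation case: two &mut borrows into disjoint halves of a slice are rejected when expressed through indices, but split_at_mut packages the disjointness proof behind a safe API:

```rust
fn main() {
    let mut data = [1, 2, 3, 4, 5, 6];

    // The compiler can't prove that data[0] and data[3] are disjoint
    // borrows when taken separately, but split_at_mut encapsulates
    // exactly that proof (using unsafe internally, verified once).
    let (left, right) = data.split_at_mut(3);
    left[0] += 10;
    right[0] += 100;

    assert_eq!(data, [11, 2, 3, 104, 5, 6]);
}
```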

The discipline

The discipline of trusting the compiler when it pushes back is, paradoxically, what makes you faster at writing Rust over time. Engineers who reflexively reach for unsafe or transmute to silence the borrow checker spend more time debugging undefined behavior than they save by skipping the compiler’s complaints. Engineers who stop and ask “what is the compiler protecting me from” tend to design their data structures around the compiler’s grain, and the compiler stops complaining.

This isn’t about loving the borrow checker. It’s about recognizing that when the borrow checker rejects code, there is roughly a 70% chance the code has a real problem (and the borrow checker is the only one telling you), a 25% chance the code is fine but the compiler can’t prove it (and you need to restructure), and a 5% chance the compiler is genuinely over-conservative (and unsafe is justified). The next chapter is about that 5%.

Sources

  • The Rustonomicon’s section on aliasing makes the underlying soundness arguments precise.
  • The Tokio docs on shared state discuss the await-while-locked anti-pattern explicitly.
  • Aria Beingessner’s posts on the implementation of LinkedList and other intrusive data structures in std are the canonical examples of cyclic-structure-design done with unsafe.