Why Rust for the Backend?
You've heard the pitch. Memory safety without garbage collection. Fearless concurrency. Zero-cost abstractions. It compiles, therefore it works. Et cetera.
All of that is true, and none of it is why you should write your backend in Rust.
You should write your backend in Rust because the ecosystem has finally caught up to the language. For years, Rust had the best type system in production use and an async story that required reading several blog posts, a few RFCs, and at least one existential crisis to understand. That era is over. Tokio is stable and excellent. Axum is ergonomic and fast. SQLx gives you compile-time checked SQL queries. The toolchain is genuinely good.
This book is about building real backend services — HTTP APIs, database access, authentication, observability, deployment — using the current stable Rust ecosystem. We're not going to implement a toy JSON API that returns { "hello": "world" } and call it a day. We're going to build things that work in production.
What This Book Is Not
This is not a Rust tutorial. If you need to be told what a lifetime is, start with The Rust Programming Language and come back. We'll be here.
This is not a survey of every async runtime or HTTP framework. Tokio runs the world. Axum is built on Tower and Hyper, which are the best abstractions available. We're going to learn these tools deeply rather than skimming ten alternatives.
This is not a book about microservices architecture, Kubernetes, or why you should decompose your monolith. Those are fine topics. They're not this topic.
What This Book Is
A practical guide to the choices that actually matter:
- Which crate, and why — not "here are five options, pick one"
- What the type system is telling you — Axum's extractor pattern is elegant once you understand FromRequest, confusing until you do
- How things fail in production — error handling, connection pools, panics in async tasks
- How to know what's happening — structured logging, distributed tracing, metrics
- How to ship it — containers, static binaries, multi-stage Docker builds
The Stack
Throughout this book, we'll use:
| Concern | Crate | Version |
|---|---|---|
| Async runtime | tokio | 1.x |
| HTTP framework | axum | 0.7.x |
| Database | sqlx | 0.7.x |
| Middleware | tower / tower-http | 0.4.x / 0.5.x |
| Tracing | tracing / tracing-subscriber | 0.1.x / 0.3.x |
| Error handling | thiserror / anyhow | 1.x |
| Configuration | config + dotenvy | 0.14.x / 0.15.x |
| JWT | jsonwebtoken | 9.x |
These are not the only options. They are good options, they work together, and they're what you'll encounter in real Rust backend codebases today.
A Note on Cargo Editions and MSRV
All examples in this book use the 2021 edition. Your Cargo.toml should have:

```toml
[package]
edition = "2021"
```
We'll target Rust 1.75+, which is when async fn in traits was stabilized. Some patterns in this book require it.
Getting Started
If you don't have Rust installed:
```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
Then install the stable toolchain:
```sh
rustup default stable
rustup update
```
Verify:
```sh
rustc --version   # rustc 1.75.0 or newer
cargo --version
```
We'll create a new project at the start of each major chapter. When you're ready to follow along for real, run:
```sh
cargo new --bin my-backend
cd my-backend
```
Now let's talk about async.
Async Rust: Tokio and the Runtime Under Your Feet
Let's talk about what async Rust is actually doing, because if you don't understand the model, you will eventually write a bug that is very difficult to debug and very embarrassing to explain.
The Model in One Paragraph
Rust's async/await is syntactic sugar over a state machine. When you write async fn, the compiler transforms it into a type that implements the Future trait. That future does nothing until something polls it. The thing that polls futures is the async runtime — in our case, Tokio. Tokio maintains a thread pool and an I/O event loop (backed by epoll/kqueue/IOCP depending on your OS), and it drives your futures forward whenever they're ready to make progress.
This is not the same as Go's goroutines or Python's asyncio event loop, though the surface syntax looks similar. The key difference: in Rust, futures are lazy and zero-cost. You can create a million futures without allocating a million stacks. They just sit there until polled.
Adding Tokio
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```
The "full" feature flag enables everything: the multi-threaded scheduler, timers, I/O, channels, sync primitives, the works. For production you might slim this down, but for learning and most applications, full is fine.
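If you do want to slim it down, one plausible feature set for a typical network service looks like this (adjust to what your code actually uses; these are Tokio's documented feature flags, not a recommendation from this book's project):

```toml
[dependencies]
tokio = { version = "1", features = [
    "macros",          # #[tokio::main], #[tokio::test]
    "rt-multi-thread", # the multi-threaded scheduler
    "net",             # TCP/UDP types
    "time",            # sleep, timeout, intervals
    "sync",            # channels, Mutex, RwLock
    "signal",          # signal handling for graceful shutdown
] }
```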
The Simplest Possible Async Program
```rust
#[tokio::main]
async fn main() {
    println!("Hello from async land");
}
```
The #[tokio::main] macro desugars to roughly:
```rust
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            println!("Hello from async land");
        });
}
```
block_on is the bridge between synchronous main and the async world. It starts the Tokio runtime and drives the future you give it until completion. Everything else happens inside that runtime.
Spawning Tasks
The async model shines when you have concurrent work. Tasks are Tokio's unit of concurrency:
```rust
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        sleep(Duration::from_millis(100)).await;
        println!("Task completed");
        42_u32
    });

    // Do other work here while the task runs...
    println!("Doing other things");

    let result = handle.await.expect("task panicked");
    println!("Task returned: {result}");
}
```
tokio::spawn returns a JoinHandle<T>. Awaiting it gives you Result<T, JoinError> — the error case happens if the task panicked. Panics in spawned tasks don't propagate to the spawner automatically; they get captured in the JoinError. This is important: a panic in a spawned task will not crash your server unless you explicitly propagate it.
The Big Rule: Don't Block the Async Runtime
Here is the thing that will bite you:
```rust
// DON'T DO THIS
async fn bad_handler() {
    std::thread::sleep(std::time::Duration::from_secs(5)); // ← BLOCKS
    // While this is sleeping, this thread cannot poll OTHER futures.
    // Your server just stopped handling requests.
}
```
Tokio's multi-threaded runtime has a fixed number of worker threads (defaulting to the number of CPU cores). If you block one of them with std::thread::sleep, a synchronous I/O call, or a long CPU-bound computation, you're stealing a thread from the runtime. Block all threads and your entire service stalls.
The solutions:
For async I/O (network, files, database): use async equivalents. tokio::time::sleep instead of std::thread::sleep. tokio::fs::read instead of std::fs::read.
For CPU-bound work: use tokio::task::spawn_blocking:
```rust
async fn compute_hash(data: Vec<u8>) -> String {
    // spawn_blocking runs the closure on a dedicated blocking thread pool,
    // separate from Tokio's async worker threads.
    tokio::task::spawn_blocking(move || {
        // Expensive CPU work here; blocking is fine.
        use sha2::{Digest, Sha256};
        let mut hasher = Sha256::new();
        hasher.update(&data);
        format!("{:x}", hasher.finalize())
    })
    .await
    .expect("blocking task panicked")
}
```
spawn_blocking uses a separate thread pool sized for blocking operations. The default limit is 512 threads — enough to handle many concurrent blocking operations without exhausting OS resources.
Channels: Communication Between Tasks
Tokio provides async-aware versions of the standard channel types:
```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Multi-producer, single-consumer channel
    let (tx, mut rx) = mpsc::channel::<String>(32); // buffer size 32

    // Spawn a producer
    let tx_clone = tx.clone();
    tokio::spawn(async move {
        for i in 0..5 {
            tx_clone.send(format!("message {i}")).await.unwrap();
        }
    });

    // Drop the original sender so the receiver knows when all senders are gone
    drop(tx);

    // Consume messages
    while let Some(msg) = rx.recv().await {
        println!("Got: {msg}");
    }
    println!("Channel closed");
}
```
When all senders are dropped, rx.recv() returns None. This is how you signal shutdown.
Tokio's other channel types:
| Type | Use case |
|---|---|
| mpsc | Multiple producers, one consumer |
| oneshot | Single value, single sender |
| broadcast | Multiple producers, multiple consumers, every receiver gets every message |
| watch | Broadcast the latest value (new receivers see current state) |
watch is particularly useful for configuration that can change at runtime, or for broadcasting shutdown signals:
```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (shutdown_tx, mut shutdown_rx) = watch::channel(false);

    tokio::spawn(async move {
        // Simulate a shutdown signal after 1 second
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
        shutdown_tx.send(true).unwrap();
    });

    // Wait for the shutdown signal
    loop {
        tokio::select! {
            _ = shutdown_rx.changed() => {
                if *shutdown_rx.borrow() {
                    println!("Shutting down");
                    break;
                }
            }
        }
    }
}
```
select!: Concurrency Within a Task
tokio::select! lets you race multiple futures and act on whichever completes first:
```rust
use tokio::time::{sleep, Duration};

async fn fetch_data() -> String {
    sleep(Duration::from_millis(200)).await;
    "data".to_string()
}

async fn fetch_with_timeout() -> Option<String> {
    tokio::select! {
        data = fetch_data() => Some(data),
        _ = sleep(Duration::from_millis(100)) => {
            println!("Timed out waiting for data");
            None
        }
    }
}
```
The branches that don't win are cancelled. This is one of the places where Rust's cancellation model differs from most languages: cancellation is cooperative and happens at .await points. There's no forced thread interruption. This is safe, but it means you need to think about what state you leave behind when a future is dropped mid-execution.
Shared State with Mutex
When multiple tasks need mutable access to the same data:
```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex;

type SharedCache = Arc<Mutex<HashMap<String, String>>>;

async fn get_or_set(cache: SharedCache, key: String, value: String) -> String {
    let mut map = cache.lock().await;
    map.entry(key).or_insert(value).clone()
}
```
Note tokio::sync::Mutex, not std::sync::Mutex. The Tokio version is async-aware — waiting for the lock yields back to the runtime instead of blocking the thread. Use std::sync::Mutex only when you hold the lock for a very short time and never across an await point. (If you do hold a std::sync::MutexGuard across an .await, the resulting future is no longer Send, so tokio::spawn will refuse to accept it — a nice compile-time safety net.)
RwLock is available too, for read-heavy workloads where multiple concurrent readers should be allowed.
The Tokio Runtime Configuration
The default #[tokio::main] creates a multi-threaded runtime. You can configure it explicitly:
```rust
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)          // Fix the thread count
        .max_blocking_threads(64)   // Limit the blocking thread pool
        .thread_name("my-runtime")  // Useful for profiling
        .enable_all()
        .build()
        .expect("Failed to build runtime")
        .block_on(run());
}

async fn run() {
    // your application here
}
```
For tests and single-threaded applications, there's also new_current_thread(), which runs everything on the calling thread. Useful for testing and simple tools, bad for a production server.
What You Actually Need to Remember
- Futures do nothing until polled. The runtime polls them.
- Don't block worker threads — use spawn_blocking for CPU-bound or sync I/O work.
- Panics in spawned tasks are captured in the JoinHandle. Handle them.
- tokio::select! races futures; cancelled branches stop at their next .await.
- Use tokio::sync::Mutex when you hold the lock across an .await. Use std::sync::Mutex otherwise.
The next chapter builds an HTTP server on top of all of this. The async foundations matter there — every request is a task, every handler is an async function, and Axum's whole extractor machinery is built on Future.
HTTP Servers with Axum: Routing, Middleware, and State
Axum is an HTTP framework built on Tower and Hyper. You need to know this because it explains the design decisions that will otherwise seem arbitrary. Tower is a library for building composable network clients and servers via the Service trait. Hyper is a low-level HTTP implementation. Axum is the ergonomic layer on top.
This architecture means Axum has no middleware system of its own — it uses Tower middleware, which means it works with the entire Tower ecosystem out of the box. It also means that when you hit a type error involving BoxError or Service<Request>, you know where to look.
The Minimal Server
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
axum = "0.7"
```
```rust
use axum::{routing::get, Router};

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/", get(root_handler))
        .route("/health", get(health_handler));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    println!("Listening on {}", listener.local_addr().unwrap());

    axum::serve(listener, app).await.unwrap();
}

async fn root_handler() -> &'static str {
    "Hello, world"
}

async fn health_handler() -> &'static str {
    "OK"
}
```
Axum 0.7 moved to axum::serve with a TcpListener. You'll see older examples using axum::Server::bind — that's Axum 0.6 and it no longer compiles.
Extractors: How Axum Reads Requests
This is the part worth understanding properly. Every argument to a handler function is an extractor — a type that implements FromRequest or FromRequestParts. Axum calls .extract() on each one before invoking your handler.
```rust
use axum::{
    extract::{Path, Query},
    http::StatusCode,
    response::Json,
};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct PaginationParams {
    page: Option<u32>,
    per_page: Option<u32>,
}

#[derive(Serialize)]
struct User {
    id: u32,
    name: String,
}

async fn get_user(
    Path(user_id): Path<u32>,
    Query(pagination): Query<PaginationParams>,
) -> Result<Json<User>, StatusCode> {
    // Path extracts from the URL path: /users/:id
    // Query extracts from the query string: ?page=1&per_page=20
    if user_id == 0 {
        return Err(StatusCode::NOT_FOUND);
    }
    Ok(Json(User {
        id: user_id,
        name: "Alice".to_string(),
    }))
}
```
FromRequest vs FromRequestParts
This distinction matters for the body:
- FromRequestParts — can be extracted without consuming the body: path parameters, query strings, headers, state.
- FromRequest — needs the entire request, including the body: JSON bodies, form data, raw bytes.
Because the body can only be read once, only one FromRequest extractor is allowed per handler. It must be the last argument. The compiler will tell you if you violate this.
```rust
use axum::extract::Json as JsonExtractor;

// GOOD: Json<T> (a body extractor) is last
async fn create_user(
    State(db): State<DbPool>,
    JsonExtractor(payload): JsonExtractor<CreateUserRequest>,
) -> StatusCode {
    // ...
    StatusCode::CREATED
}

// COMPILE ERROR: two body extractors
// async fn bad_handler(
//     JsonExtractor(a): JsonExtractor<Foo>,
//     JsonExtractor(b): JsonExtractor<Bar>, // ← ERROR
// ) {}
```
Rejections
When extraction fails, Axum returns a rejection. The built-in extractors have typed rejections that implement IntoResponse. You don't usually need to handle them explicitly — they automatically become appropriate HTTP error responses (malformed JSON bodies become 4xx responses, path parameters that fail to parse become 400s, and so on).
But you can handle them:
```rust
use axum::{
    extract::rejection::JsonRejection,
    http::StatusCode,
    response::{IntoResponse, Response},
    Json,
};

async fn strict_handler(
    payload: Result<Json<MyRequest>, JsonRejection>,
) -> Response {
    match payload {
        Ok(Json(req)) => {
            // process req
            StatusCode::OK.into_response()
        }
        Err(rejection) => {
            // Custom error response
            (StatusCode::BAD_REQUEST, rejection.body_text()).into_response()
        }
    }
}
```
Shared State
Handlers often need access to shared resources: a database pool, configuration, an HTTP client. Axum's answer is State<T>.
```rust
use axum::{extract::State, response::Json, routing::get, Router};
use std::sync::Arc;

// Your application state
#[derive(Clone)]
struct AppState {
    db: sqlx::PgPool,
    config: Arc<Config>,
}

async fn get_items(State(state): State<AppState>) -> Json<Vec<String>> {
    // state.db and state.config are available here
    Json(vec!["item1".to_string(), "item2".to_string()])
}

#[tokio::main]
async fn main() {
    let state = AppState {
        db: create_pool().await,
        config: Arc::new(Config::from_env()),
    };

    let app = Router::new()
        .route("/items", get(get_items))
        .with_state(state);

    // ...
}
```
AppState must implement Clone. If you have non-Clone things in your state, wrap them in Arc. The State extractor clones your state for each handler invocation — for things like database pools, cloning is cheap because the pool itself is reference-counted internally.
Multiple State Types
You can use FromRef to extract parts of your state:
```rust
use axum::extract::FromRef;

#[derive(Clone)]
struct AppState {
    db: sqlx::PgPool,
    jwt_secret: String,
}

// Allow handlers to extract just the pool
impl FromRef<AppState> for sqlx::PgPool {
    fn from_ref(state: &AppState) -> Self {
        state.db.clone()
    }
}

// This handler only needs the pool
async fn list_users(State(pool): State<sqlx::PgPool>) -> Json<Vec<User>> {
    // ...
    todo!()
}
```
Responses
Anything that implements IntoResponse can be returned from a handler. The standard types cover most cases:
```rust
use axum::{
    http::{HeaderMap, StatusCode},
    response::{Html, IntoResponse, Json, Redirect},
};

// Tuple responses: (status, body) or (status, headers, body)
async fn created() -> impl IntoResponse {
    (StatusCode::CREATED, Json(serde_json::json!({"id": 1})))
}

async fn with_headers() -> impl IntoResponse {
    let mut headers = HeaderMap::new();
    headers.insert("X-Request-Id", "abc123".parse().unwrap());
    (StatusCode::OK, headers, "response body")
}

async fn redirect() -> Redirect {
    Redirect::to("/new-location")
}

async fn html_page() -> Html<String> {
    Html("<h1>Hello</h1>".to_string())
}
```
Custom Response Types
Implement IntoResponse for your domain types:
```rust
use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};
use axum::Json;

enum ApiError {
    NotFound(String),
    Unauthorized,
    Internal(anyhow::Error),
}

impl IntoResponse for ApiError {
    fn into_response(self) -> Response {
        let (status, message) = match self {
            ApiError::NotFound(msg) => (StatusCode::NOT_FOUND, msg),
            ApiError::Unauthorized => (
                StatusCode::UNAUTHORIZED,
                "Unauthorized".to_string(),
            ),
            ApiError::Internal(err) => {
                tracing::error!("Internal error: {err:#}");
                (
                    StatusCode::INTERNAL_SERVER_ERROR,
                    "Internal server error".to_string(),
                )
            }
        };
        (status, Json(serde_json::json!({"error": message}))).into_response()
    }
}

// Now handlers can return Result<T, ApiError>
async fn get_user_by_id(
    Path(id): Path<u32>,
) -> Result<Json<User>, ApiError> {
    if id == 0 {
        return Err(ApiError::NotFound(format!("User {id} not found")));
    }
    Ok(Json(User { id, name: "Alice".to_string() }))
}
```
Middleware
Tower middleware wraps services. Axum provides a layer method to attach Tower layers to your router.
The tower-http Crate
The tower-http crate provides the middleware you'll actually use:
```toml
[dependencies]
tower-http = { version = "0.5", features = [
    "cors",
    "compression-gzip",
    "request-id",
    "trace",
    "timeout",
] }
tower = { version = "0.4", features = ["util"] }
```
```rust
use axum::{routing::get, Router};
use std::time::Duration;
use tower_http::{
    compression::CompressionLayer,
    cors::{Any, CorsLayer},
    request_id::{MakeRequestUuid, PropagateRequestIdLayer, SetRequestIdLayer},
    timeout::TimeoutLayer,
    trace::TraceLayer,
};

fn build_router() -> Router {
    Router::new()
        .route("/", get(root_handler))
        // Layers added later wrap the ones added earlier, so the last layer
        // added is outermost and runs first. List them innermost-first.
        .layer(
            CorsLayer::new()
                .allow_origin(Any)
                .allow_methods(Any)
                .allow_headers(Any),
        )
        .layer(TimeoutLayer::new(Duration::from_secs(30)))
        .layer(CompressionLayer::new())
        .layer(PropagateRequestIdLayer::x_request_id())
        // Trace sits outside the timeout, so timed-out requests get recorded
        .layer(TraceLayer::new_for_http())
        // Set the request id outermost, so it exists before tracing starts
        .layer(SetRequestIdLayer::x_request_id(MakeRequestUuid))
}
```
Layer ordering matters. Layers are applied from the bottom of the stack up, wrapping the router. The outermost layer runs first. A common gotcha: if you put the timeout outside the trace layer, a timed-out request is cancelled before the trace layer can record its response, so the timeout never shows up in your trace spans. Put the trace layer outside the timeout.
Custom Middleware with axum::middleware::from_fn
For middleware that needs access to Axum-specific things (like extractors), use from_fn:
```rust
use axum::{
    extract::Request,
    http::StatusCode,
    middleware::{self, Next},
    response::Response,
    routing::get,
    Router,
};

async fn require_api_key(
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    let key = req
        .headers()
        .get("X-Api-Key")
        .and_then(|v| v.to_str().ok());

    if key != Some("secret-key") {
        return Err(StatusCode::UNAUTHORIZED);
    }

    Ok(next.run(req).await)
}

fn secured_router() -> Router {
    Router::new()
        .route("/secure", get(secure_handler))
        .layer(middleware::from_fn(require_api_key))
}
```
For middleware that needs state:
```rust
async fn auth_middleware(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // state is available here
    Ok(next.run(req).await)
}

fn secured_router_with_state(state: AppState) -> Router {
    Router::new()
        .route("/secure", get(secure_handler))
        .route_layer(middleware::from_fn_with_state(state.clone(), auth_middleware))
        .with_state(state)
}
```
Note route_layer vs layer — route_layer only runs for requests that match a route, not for 404s. This matters for middleware like authentication: with plain layer, a request to a nonexistent path would be rejected with 401 instead of 404, and every scan of an unknown path would show up as an auth failure.
Routing
Axum's router supports all the HTTP methods and path parameters you'd expect:
```rust
use axum::routing::{delete, get, patch, post, put};

let app = Router::new()
    // Static routes
    .route("/", get(index))
    // Path parameters
    .route("/users/:id", get(get_user))
    // Multiple methods on one route
    .route("/users", get(list_users).post(create_user))
    // Nested routes
    .nest("/api/v1", api_v1_router())
    // Catch-all (must be last)
    .route("/static/*path", get(serve_static));
```
Router Merging and Nesting
```rust
fn api_v1_router() -> Router<AppState> {
    Router::new()
        .route("/users", get(list_users))
        .route("/users/:id", get(get_user).put(update_user).delete(delete_user))
}

fn app(state: AppState) -> Router {
    Router::new()
        .nest("/api/v1", api_v1_router())
        .nest("/api/v2", api_v2_router())
        .with_state(state)
}
```
Notice Router<AppState> — when you split your router into functions, you need to thread the state type parameter through them. The state itself is supplied exactly once, by the with_state call at the top level, which turns the Router<AppState> into a plain Router. If a sub-router's handlers only need part of the state, implement FromRef so they can extract just that part.
Graceful Shutdown
Production servers need to stop cleanly when they receive SIGTERM:
```rust
use tokio::signal;

#[tokio::main]
async fn main() {
    let app = build_router();

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();

    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal())
        .await
        .unwrap();
}

async fn shutdown_signal() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("Failed to install Ctrl+C handler");
    };

    #[cfg(unix)]
    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .expect("Failed to install SIGTERM handler")
            .recv()
            .await;
    };

    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }

    println!("Shutdown signal received");
}
```
When the shutdown future resolves, Axum stops accepting new connections and waits for in-flight requests to complete. If you have a timeout budget for graceful shutdown (e.g., 30 seconds in Kubernetes), you'll need to combine this with a timeout on the serve call itself.
Putting It Together
A complete, production-shaped router:
```rust
use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::Json,
    routing::get,
    Router,
};
use std::sync::Arc;
use std::time::Duration;
use tower_http::{timeout::TimeoutLayer, trace::TraceLayer};

#[derive(Clone)]
struct AppState {
    // We'll fill this in the next chapter
    db: Arc<()>,
}

pub fn create_app(state: AppState) -> Router {
    let api = Router::new()
        .route("/users", get(list_users).post(create_user))
        .route("/users/:id", get(get_user));

    Router::new()
        .route("/health", get(health_check))
        .nest("/api/v1", api)
        // Timeout first, trace second: the trace layer ends up outermost
        .layer(TimeoutLayer::new(Duration::from_secs(30)))
        .layer(TraceLayer::new_for_http())
        .with_state(state)
}

async fn health_check() -> StatusCode {
    StatusCode::OK
}

async fn list_users(State(_state): State<AppState>) -> Json<Vec<serde_json::Value>> {
    Json(vec![])
}

async fn create_user(
    State(_state): State<AppState>,
    Json(_body): Json<serde_json::Value>,
) -> StatusCode {
    StatusCode::CREATED
}

async fn get_user(
    State(_state): State<AppState>,
    Path(_id): Path<u32>,
) -> Result<Json<serde_json::Value>, StatusCode> {
    Err(StatusCode::NOT_FOUND)
}
```
In the next chapter we'll replace that Arc<()> with a real database pool.
Databases with SQLx: Async, Typed, and Honest
SQLx is the async database library for Rust. It's not an ORM — there's no query builder, no magical User::find_by_id() method, no DSL to learn. You write SQL. SQLx runs it and maps the results to Rust types. If your SQL is wrong, the compiler tells you at compile time. This is the correct trade-off.
The headline feature is compile-time query verification: the query! macro connects to your database during compilation, sends the query to the server for parsing and type analysis, and uses the returned metadata to verify that your Rust code handles the result types correctly. No more "column 'user_id' does not exist" errors at 3am.
Setup
```toml
[dependencies]
sqlx = { version = "0.7", features = [
    "runtime-tokio-rustls",
    "postgres",   # or "sqlite", "mysql"
    "macros",     # for query!, query_as!, etc.
    "migrate",    # for migrations
    "uuid",       # map UUID columns to uuid::Uuid
    "time",       # map timestamp columns to time::OffsetDateTime
    "json",       # map jsonb columns to serde_json::Value
] }
uuid = { version = "1", features = ["v4"] }
time = { version = "0.3", features = ["serde"] }
```
The runtime-tokio-rustls feature uses Tokio as the async runtime and rustls for TLS (no OpenSSL dependency needed). If you need native-tls, swap accordingly.
Connection Pools
You never want a single database connection in a production service. You want a pool:
```rust
use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;

pub async fn create_pool(database_url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(20)
        .min_connections(2)
        .acquire_timeout(std::time::Duration::from_secs(5))
        .idle_timeout(std::time::Duration::from_secs(600))
        .max_lifetime(std::time::Duration::from_secs(1800))
        .connect(database_url)
        .await
}
```
What these options mean:
- max_connections(20): the maximum pool size. Size this against your database server's max_connections limit, divided across your service instances. Postgres defaults to 100; if you run 5 instances with pools of 20, you're using all 100 connections. That math matters.
- min_connections(2): keep at least 2 connections warm. Cold connections add latency.
- acquire_timeout: how long to wait for a connection from the pool before failing. Without this, a slow query cascade can cause all incoming requests to pile up waiting for connections forever.
- idle_timeout / max_lifetime: recycle connections that might be in a bad state.
PgPool is Clone, and cloning it is cheap — it clones a reference to the same pool. Pass it around by cloning.
The query! Macro
```rust
use sqlx::PgPool;

pub async fn get_user_by_id(pool: &PgPool, id: i64) -> Result<Option<User>, sqlx::Error> {
    // The query! macro:
    // 1. Connects to DATABASE_URL at compile time
    // 2. Sends this query to Postgres for analysis
    // 3. Infers the return type of each column
    // 4. Returns an anonymous struct with typed fields
    let row = sqlx::query!(
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(pool)
    .await?;

    Ok(row.map(|r| User {
        id: r.id,
        email: r.email,
        name: r.name,
        created_at: r.created_at,
    }))
}
```
The result of query! is an anonymous struct whose fields match your SELECT columns. The types come from Postgres: BIGINT → i64, TEXT → String, TIMESTAMPTZ → time::OffsetDateTime (with the time feature), UUID → uuid::Uuid.
Nullable columns become Option<T>. If Postgres says a column can be NULL, the field is optional. If it can't be NULL, it's not. The compiler enforces this, which means you can't accidentally treat a nullable column as definitely-present.
query_as! for Existing Structs
More often, you have a struct you want to map to:
```rust
use serde::{Deserialize, Serialize};
use time::OffsetDateTime;
use uuid::Uuid;

#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: OffsetDateTime,
    pub updated_at: OffsetDateTime,
}

pub async fn get_user(pool: &PgPool, id: Uuid) -> Result<Option<User>, sqlx::Error> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at, updated_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(pool)
    .await
}
```
query_as! maps the result set directly to your struct. The field names and types must match (you'll get a compile error if they don't, once the macro verifies against the database schema).
Fetch Methods
| Method | Returns | Use when |
|---|---|---|
| .fetch_one(pool) | Result<Row, Error> | Exactly one row expected; error if zero or multiple |
| .fetch_optional(pool) | Result<Option<Row>, Error> | Zero or one row |
| .fetch_all(pool) | Result<Vec<Row>, Error> | All rows |
| .fetch(pool) | Stream<Item = Result<Row, Error>> | Streaming large result sets |
| .execute(pool) | Result<PgQueryResult, Error> | INSERT/UPDATE/DELETE |
For large result sets, .fetch() returns a stream that yields rows one at a time, avoiding loading everything into memory:
```rust
use futures::TryStreamExt;

pub async fn process_all_users(pool: &PgPool) -> Result<(), sqlx::Error> {
    let mut rows = sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at, updated_at FROM users"
    )
    .fetch(pool);

    while let Some(user) = rows.try_next().await? {
        process_user(user).await;
    }

    Ok(())
}
```
Transactions
Transactions in SQLx are straightforward:
```rust
pub async fn transfer_credits(
    pool: &PgPool,
    from_user: Uuid,
    to_user: Uuid,
    amount: i64,
) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    let balance = sqlx::query_scalar!(
        "SELECT balance FROM accounts WHERE user_id = $1 FOR UPDATE",
        from_user
    )
    .fetch_one(&mut *tx)
    .await?;

    if balance < amount {
        // tx is dropped here, which rolls back the transaction
        return Err(sqlx::Error::RowNotFound); // or your own error type
    }

    sqlx::query!(
        "UPDATE accounts SET balance = balance - $1 WHERE user_id = $2",
        amount,
        from_user
    )
    .execute(&mut *tx)
    .await?;

    sqlx::query!(
        "UPDATE accounts SET balance = balance + $1 WHERE user_id = $2",
        amount,
        to_user
    )
    .execute(&mut *tx)
    .await?;

    tx.commit().await?;
    Ok(())
}
```
When tx is dropped without commit(), SQLx rolls back. This is the correct default. Note &mut *tx — you dereference the transaction to get to the underlying connection.
Migrations
SQLx has a migration system built in. Migrations are numbered SQL files:
```
migrations/
  20240101000000_create_users.sql
  20240102000000_add_email_index.sql
```
Each file:
```sql
-- 20240101000000_create_users.sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    password_hash TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX users_email_idx ON users (email);
```
Install the CLI:
```sh
cargo install sqlx-cli --no-default-features --features rustls,postgres
```
Run migrations:
export DATABASE_URL="postgres://localhost/myapp"
sqlx migrate run
Or run them programmatically at startup:
```rust
pub async fn run_migrations(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
    sqlx::migrate!("./migrations").run(pool).await
}

#[tokio::main]
async fn main() {
    let pool = create_pool(&std::env::var("DATABASE_URL").unwrap())
        .await
        .unwrap();
    run_migrations(&pool).await.expect("Migration failed");
    // start server...
}
```
sqlx::migrate! includes the SQL files in your binary at compile time. You get a self-contained executable that can migrate its own database on startup, no external tools needed.
Compile-Time Verification Setup
For the query! macros to work, you need DATABASE_URL set when you run cargo build or cargo check:
export DATABASE_URL="postgres://postgres:password@localhost/myapp_dev"
cargo build
For CI and environments without a database, SQLx has an offline mode. After setting up your database and running:
cargo sqlx prepare
SQLx writes query metadata to a .sqlx/ directory that you commit to the repository. CI can then build without a live database:
SQLX_OFFLINE=true cargo build
This is the standard pattern: offline data in CI, live database in development.
Integrating with Axum
Here's a complete CRUD example for users:
```rust
use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::Json,
    routing::{get, post},
    Router,
};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use time::OffsetDateTime;
use uuid::Uuid;

#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: OffsetDateTime,
}

#[derive(Deserialize)]
pub struct CreateUser {
    pub email: String,
    pub name: String,
}

pub fn user_router() -> Router<PgPool> {
    Router::new()
        .route("/users", get(list_users).post(create_user))
        .route("/users/:id", get(get_user))
}

async fn list_users(State(pool): State<PgPool>) -> Result<Json<Vec<User>>, StatusCode> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at FROM users ORDER BY created_at DESC"
    )
    .fetch_all(&pool)
    .await
    .map(Json)
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}

async fn get_user(
    State(pool): State<PgPool>,
    Path(id): Path<Uuid>,
) -> Result<Json<User>, StatusCode> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(&pool)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
    .map(Json)
    .ok_or(StatusCode::NOT_FOUND)
}

async fn create_user(
    State(pool): State<PgPool>,
    Json(payload): Json<CreateUser>,
) -> Result<(StatusCode, Json<User>), StatusCode> {
    let user = sqlx::query_as!(
        User,
        r#"
        INSERT INTO users (email, name)
        VALUES ($1, $2)
        RETURNING id, email, name, created_at
        "#,
        payload.email,
        payload.name
    )
    .fetch_one(&pool)
    .await
    .map_err(|e| {
        if let sqlx::Error::Database(db_err) = &e {
            // PostgreSQL unique constraint violation
            if db_err.constraint().is_some() {
                return StatusCode::CONFLICT;
            }
        }
        StatusCode::INTERNAL_SERVER_ERROR
    })?;

    Ok((StatusCode::CREATED, Json(user)))
}
```
Note the RETURNING clause in the INSERT — this is standard PostgreSQL and lets you get the full created row without a second query. SQLx handles it cleanly with fetch_one.
What Goes Wrong
DATABASE_URL not set: The query! macro fails at compile time with a confusing error. Set SQLX_OFFLINE=true or set the URL.
Connection pool exhausted: All max_connections are in use and acquire_timeout triggers. Your request fails with a timeout error. Monitor pool utilization. Consider whether slow queries are holding connections for too long.
Schema drift: Your Rust types no longer match the actual database schema. Run migrations. The compile-time checking only works against the schema that existed when you last ran cargo sqlx prepare or compiled against a live database.
Missing sqlx::FromRow derive: query_as! requires the target struct to implement FromRow, or you need to use the query! macro and map manually.
The next chapter covers authentication — including storing and verifying password hashes, which is the first place you'll need to reach for spawn_blocking.
Authentication and Authorization: JWTs, Sessions, and Not Screwing It Up
Authentication is the part of your application where the mistakes are both easiest to make and most consequential. This chapter covers the patterns that work, explains why the naive implementations fail, and is direct about the trade-offs.
Passwords: Hash Them Correctly
The only acceptable password hashing algorithms for new code are Argon2, bcrypt, and scrypt. Do not use SHA-256. Do not use SHA-512. Do not use MD5. These are not password hashing algorithms; they're message digest functions designed to be fast. Fast is the enemy of password security.
Argon2id is the current recommendation. Use the argon2 crate:
[dependencies]
argon2 = "0.5"
password-hash = "0.5"
rand_core = { version = "0.6", features = ["getrandom"] }
```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};

pub fn hash_password(password: &str) -> Result<String, argon2::password_hash::Error> {
    let salt = SaltString::generate(&mut OsRng);
    let argon2 = Argon2::default();
    let hash = argon2.hash_password(password.as_bytes(), &salt)?;
    Ok(hash.to_string())
}

pub fn verify_password(
    password: &str,
    hash: &str,
) -> Result<bool, argon2::password_hash::Error> {
    let parsed_hash = PasswordHash::new(hash)?;
    Ok(Argon2::default()
        .verify_password(password.as_bytes(), &parsed_hash)
        .is_ok())
}
```
Argon2 is intentionally slow — that's the feature. On modern hardware, a single hash might take 50-100ms with default parameters. That means you must run it in spawn_blocking:
```rust
use tokio::task;

pub async fn hash_password_async(password: String) -> Result<String, AppError> {
    task::spawn_blocking(move || hash_password(&password))
        .await
        .map_err(|_| AppError::Internal("Password hashing task panicked".into()))?
        .map_err(|_| AppError::Internal("Password hashing failed".into()))
}

pub async fn verify_password_async(password: String, hash: String) -> Result<bool, AppError> {
    task::spawn_blocking(move || verify_password(&password, &hash))
        .await
        .map_err(|_| AppError::Internal("Verification task panicked".into()))?
        .map_err(|_| AppError::Internal("Verification failed".into()))
}
```
Calling hash_password directly in an async handler will block the Tokio worker thread for the duration of the hash operation. With default Argon2 parameters, you'd cap your authentication throughput at roughly 10-20 requests/second per thread. This is the kind of bug that looks like an infrastructure problem until you profile it.
JSON Web Tokens
JWTs are widely used and widely misunderstood. A JWT is three base64-encoded pieces separated by dots: a header, a payload (claims), and a signature. The server creates the token and signs it. Clients present the token on subsequent requests. The server verifies the signature — if valid, the claims are trusted without a database lookup.
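The three-part structure is easy to see without any crate at all. This dependency-free sketch only splits a token into its segments; the sample token is a made-up, non-verifiable example, and real parsing and validation belong to a JWT library:

```rust
// A JWT is three dot-separated base64url segments: header.payload.signature.
// This only inspects structure; it performs no decoding or verification.
fn jwt_parts(token: &str) -> Option<(&str, &str, &str)> {
    let mut parts = token.splitn(3, '.');
    match (parts.next(), parts.next(), parts.next()) {
        (Some(h), Some(p), Some(s)) if !h.is_empty() && !p.is_empty() && !s.is_empty() => {
            Some((h, p, s))
        }
        _ => None,
    }
}

fn main() {
    // Illustrative token only; the segments are base64url-encoded JSON
    // for the header and claims, plus a raw signature.
    let token = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI0MiJ9.c2lnbmF0dXJl";
    let (header, payload, signature) = jwt_parts(token).expect("malformed token");
    println!("header={header} payload={payload} signature={signature}");
}
```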
[dependencies]
jsonwebtoken = "9"
serde = { version = "1", features = ["derive"] }
```rust
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
    pub sub: String, // Subject (user ID)
    pub exp: u64,    // Expiration timestamp
    pub iat: u64,    // Issued at
    pub jti: String, // JWT ID (for revocation)
}

pub struct JwtConfig {
    encoding_key: EncodingKey,
    decoding_key: DecodingKey,
    expiration_seconds: u64,
}

impl JwtConfig {
    pub fn new(secret: &[u8], expiration_seconds: u64) -> Self {
        Self {
            encoding_key: EncodingKey::from_secret(secret),
            decoding_key: DecodingKey::from_secret(secret),
            expiration_seconds,
        }
    }

    pub fn issue(&self, user_id: &str) -> Result<String, jsonwebtoken::errors::Error> {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();
        let claims = Claims {
            sub: user_id.to_string(),
            exp: now + self.expiration_seconds,
            iat: now,
            jti: uuid::Uuid::new_v4().to_string(),
        };
        encode(&Header::default(), &claims, &self.encoding_key)
    }

    pub fn verify(&self, token: &str) -> Result<Claims, jsonwebtoken::errors::Error> {
        let mut validation = Validation::default();
        validation.validate_exp = true; // Enforces expiration
        decode::<Claims>(token, &self.decoding_key, &validation).map(|data| data.claims)
    }
}
```
The jti (JWT ID) field is important for revocation. JWTs are stateless by design, which means you can't "invalidate" one without either a blocklist or short expiration. A jti lets you maintain a blocklist of revoked tokens — useful for logout and password change scenarios.
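A revocation check keyed on jti can be sketched without any framework. The TokenBlocklist type below is a hypothetical in-process store, useful only for a single server; a real deployment would typically back this with Redis, with entry TTLs matching the token expiry so the blocklist doesn't grow forever:

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Hypothetical in-process blocklist; once you run more than one instance,
// this needs to move to a shared store (e.g. Redis).
pub struct TokenBlocklist {
    revoked: RwLock<HashSet<String>>,
}

impl TokenBlocklist {
    pub fn new() -> Self {
        Self { revoked: RwLock::new(HashSet::new()) }
    }

    // Called on logout or password change.
    pub fn revoke(&self, jti: &str) {
        self.revoked.write().unwrap().insert(jti.to_string());
    }

    // Called after signature verification, before trusting the claims.
    pub fn is_revoked(&self, jti: &str) -> bool {
        self.revoked.read().unwrap().contains(jti)
    }
}

fn main() {
    let blocklist = TokenBlocklist::new();
    blocklist.revoke("example-jti-1");
    assert!(blocklist.is_revoked("example-jti-1"));
    assert!(!blocklist.is_revoked("example-jti-2"));
}
```

Note that checking a blocklist reintroduces a lookup per request, which gives back part of the statelessness you chose JWTs for; that trade-off is usually acceptable for logout and password-change handling.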
JWT Extraction as Axum Middleware
Rather than extracting the JWT in every handler, create an extractor:
```rust
use axum::{
    async_trait,
    extract::{FromRef, FromRequestParts},
    http::{request::Parts, StatusCode},
    RequestPartsExt,
};
use axum_extra::{
    headers::{authorization::Bearer, Authorization},
    TypedHeader,
};
// Requires: axum-extra = { version = "0.9", features = ["typed-header"] }

#[derive(Debug, Clone)]
pub struct AuthenticatedUser {
    pub user_id: String,
    pub claims: Claims,
}

#[async_trait]
impl<S> FromRequestParts<S> for AuthenticatedUser
where
    S: Send + Sync,
    JwtConfig: FromRef<S>,
{
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
        let TypedHeader(Authorization(bearer)) = parts
            .extract::<TypedHeader<Authorization<Bearer>>>()
            .await
            .map_err(|_| (StatusCode::UNAUTHORIZED, "Missing or invalid Authorization header"))?;

        let jwt_config = JwtConfig::from_ref(state);
        let claims = jwt_config
            .verify(bearer.token())
            .map_err(|_| (StatusCode::UNAUTHORIZED, "Invalid or expired token"))?;

        Ok(AuthenticatedUser {
            user_id: claims.sub.clone(),
            claims,
        })
    }
}
```
Now any handler that needs authentication just includes AuthenticatedUser as an argument:
```rust
async fn get_current_user(
    auth: AuthenticatedUser,
    State(pool): State<PgPool>,
) -> Result<Json<User>, StatusCode> {
    get_user_by_id(&pool, &auth.user_id)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)? // Result -> Option<User>
        .map(Json)
        .ok_or(StatusCode::NOT_FOUND)
}
```
If the token is missing, expired, or invalid, Axum returns the extractor's Rejection response before the handler runs. Clean, composable, and impossible to forget.
The Login Endpoint
Putting it together:
```rust
use axum::{extract::State, http::StatusCode, response::Json};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;

#[derive(Deserialize)]
pub struct LoginRequest {
    email: String,
    password: String,
}

#[derive(Serialize)]
pub struct LoginResponse {
    access_token: String,
    token_type: String,
    expires_in: u64,
}

pub async fn login(
    State(pool): State<PgPool>,
    State(jwt): State<JwtConfig>,
    Json(payload): Json<LoginRequest>,
) -> Result<Json<LoginResponse>, StatusCode> {
    let user = sqlx::query!(
        "SELECT id, password_hash FROM users WHERE email = $1",
        payload.email
    )
    .fetch_optional(&pool)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    // Always verify a password hash, even if the user is not found.
    // This prevents timing-based user enumeration.
    let (user_id, stored_hash) = match user {
        Some(u) => (u.id.to_string(), u.password_hash),
        None => {
            // Verify against a dummy hash to burn consistent time
            let _ = verify_password_async(
                payload.password,
                "$argon2id$v=19$m=19456,t=2,p=1$AAAAAAAAAAAAAAAA$AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA".to_string(),
            )
            .await;
            return Err(StatusCode::UNAUTHORIZED);
        }
    };

    let password_valid = verify_password_async(payload.password, stored_hash)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    if !password_valid {
        return Err(StatusCode::UNAUTHORIZED);
    }

    let token = jwt
        .issue(&user_id)
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    Ok(Json(LoginResponse {
        access_token: token,
        token_type: "Bearer".to_string(),
        expires_in: 3600,
    }))
}
```
The timing-equalization for unknown users is worth noting. If you return immediately when a user doesn't exist, attackers can enumerate valid email addresses by measuring response times. Always do the slow work (password verification) even on the failure path.
Role-Based Authorization
Once authentication works, authorization is a matter of attaching roles or permissions to your JWT claims and checking them:
```rust
#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
    pub sub: String,
    pub exp: u64,
    pub iat: u64,
    pub jti: String,
    pub roles: Vec<String>, // Add roles to claims
}

// A typed wrapper plus a helper that produces a role-checking closure,
// usable from custom middleware
pub struct RequireRole(pub AuthenticatedUser);

impl RequireRole {
    pub fn require(
        role: &str,
    ) -> impl Fn(AuthenticatedUser) -> Result<Self, (StatusCode, &'static str)> + Clone + '_ {
        move |auth: AuthenticatedUser| {
            if auth.claims.roles.contains(&role.to_string()) {
                Ok(RequireRole(auth))
            } else {
                Err((StatusCode::FORBIDDEN, "Insufficient permissions"))
            }
        }
    }
}

// Handler that checks the admin role inline
async fn admin_action(
    auth: AuthenticatedUser,
) -> Result<Json<serde_json::Value>, StatusCode> {
    if !auth.claims.roles.contains(&"admin".to_string()) {
        return Err(StatusCode::FORBIDDEN);
    }
    Ok(Json(serde_json::json!({"status": "ok"})))
}
```
For more sophisticated authorization, look at extracting a Permission type from your claims and using FromRequestParts to check it — the same pattern as AuthenticatedUser but with a required permission parameter.
Sessions vs JWTs
JWTs are stateless. Sessions are stateful. This trade-off is real:
| | JWTs | Sessions |
|---|---|---|
| Database lookup per request | No | Yes |
| Revocation | Blocklist or short expiry | Delete session row |
| Horizontal scaling | Easy (no shared state needed) | Requires shared session store (Redis) |
| Token size | ~500-1000 bytes | ~32 bytes (session ID) |
| Claims visibility | Visible to client (base64) | Opaque |
For APIs consumed by mobile or single-page apps, JWTs are the standard choice. For traditional web apps with server-rendered pages, sessions backed by a database or Redis are simpler to reason about.
If you implement sessions, use tower-sessions with a store backend. Don't roll your own session ID generation — use cryptographically secure random bytes, not sequential IDs.
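What "cryptographically secure random bytes" means in practice: 128 bits or more of OS-provided randomness, encoded for transport. The sketch below reads /dev/urandom directly purely to stay dependency-free (and is therefore Unix-only); in a real service you would use a CSPRNG crate such as rand's OsRng or getrandom instead:

```rust
use std::fs::File;
use std::io::Read;

// Illustrative only: 32 bytes of OS randomness, hex-encoded.
// Unix-specific; real code should use a CSPRNG crate (rand::rngs::OsRng).
fn generate_session_id() -> std::io::Result<String> {
    let mut bytes = [0u8; 32];
    File::open("/dev/urandom")?.read_exact(&mut bytes)?;
    Ok(bytes.iter().map(|b| format!("{b:02x}")).collect())
}

fn main() -> std::io::Result<()> {
    let a = generate_session_id()?;
    let b = generate_session_id()?;
    assert_eq!(a.len(), 64); // 32 bytes -> 64 hex characters
    assert_ne!(a, b);        // collisions are astronomically unlikely
    println!("session id: {a}");
    Ok(())
}
```

The point of the 32-byte size is that session IDs must be unguessable, not merely unique; a sequential or timestamp-derived ID fails that requirement even though it never collides.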
What to Never Do
- Never store plaintext passwords. Not even temporarily.
- Never deploy a symmetric HMAC signing secret without a rotation plan. Rotate it on a schedule, and immediately after any suspected exposure.
- Never log JWT tokens. They're credentials. Treat them like passwords.
- Never trust JWT claims without signature verification. The base64 is not encryption; anyone can decode it.
- Never set exp to a year from now. Short expiration plus refresh tokens is the pattern: 15 minutes for access tokens, days for refresh tokens.
- Never put sensitive data in JWT claims. They're visible to the client. Put a user ID, not an SSN.
The next chapter is about what happens when any of this — or anything else — goes wrong.
Error Handling in Production: thiserror, anyhow, and When to Use Which
Rust's error handling is genuinely good. The Result<T, E> type forces you to handle errors at the type level, the ? operator propagates them concisely, and the ecosystem has standardized on traits that make errors composable. The tooling — thiserror and anyhow — is stable and excellent.
The confusion comes from having two crates that both deal with errors, used for different purposes. This chapter explains the distinction, which is simpler than most blog posts make it sound.
The Rule
thiserror — for library code and domain errors. Use it when the caller needs to match on the error type and do different things based on which error occurred.
anyhow — for application code. Use it when you just need to propagate errors and eventually log or return a 500.
A backend application uses both. Your database layer defines typed errors with thiserror. Your request handlers return anyhow::Result or your own anyhow-based error type and convert domain errors to HTTP responses.
thiserror: Typed Domain Errors
[dependencies]
thiserror = "1"
```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum UserError {
    #[error("User not found: {id}")]
    NotFound { id: uuid::Uuid },

    #[error("Email already registered: {email}")]
    EmailConflict { email: String },

    #[error("Invalid password")]
    InvalidPassword,

    #[error("Account locked: too many failed attempts")]
    AccountLocked,

    #[error("Database error: {0}")]
    Database(#[from] sqlx::Error),
}
```
#[error("...")] implements std::fmt::Display. #[from] implements From<sqlx::Error> for UserError, so you can use ? on sqlx calls in functions that return Result<_, UserError>.
The #[from] attribute generates exactly one conversion. If you have two error types you want to convert from, you can only #[from] one of them per variant — the other you convert manually or via a separate From impl.
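A dependency-free sketch of that second, hand-written conversion; the error types here (DiskError, NetworkError, StorageError) are hypothetical stand-ins for what thiserror would otherwise generate:

```rust
use std::fmt;

// Two hypothetical low-level errors that should both map to StorageError::Io.
#[derive(Debug)]
struct DiskError(String);

#[derive(Debug)]
struct NetworkError(String);

#[derive(Debug)]
enum StorageError {
    Io(String),
}

impl fmt::Display for StorageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            StorageError::Io(msg) => write!(f, "I/O error: {msg}"),
        }
    }
}

// Only one of these could come from #[from] on the Io variant;
// the other must be written by hand.
impl From<DiskError> for StorageError {
    fn from(e: DiskError) -> Self {
        StorageError::Io(e.0)
    }
}

impl From<NetworkError> for StorageError {
    fn from(e: NetworkError) -> Self {
        StorageError::Io(e.0)
    }
}

fn read_block() -> Result<(), StorageError> {
    let raw: Result<(), DiskError> = Err(DiskError("sector unreadable".into()));
    raw?; // ? converts DiskError -> StorageError via From
    Ok(())
}

fn main() {
    let err = read_block().unwrap_err();
    println!("{err}");
}
```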
Matching Domain Errors in Handlers
```rust
async fn create_user_handler(
    State(pool): State<PgPool>,
    Json(payload): Json<CreateUser>,
) -> Result<(StatusCode, Json<User>), ApiError> {
    match create_user(&pool, payload).await {
        Ok(user) => Ok((StatusCode::CREATED, Json(user))),
        Err(UserError::EmailConflict { email }) => {
            Err(ApiError::Conflict(format!("Email already registered: {email}")))
        }
        Err(UserError::Database(e)) => {
            tracing::error!("Database error creating user: {e}");
            Err(ApiError::Internal)
        }
        Err(e) => {
            tracing::error!("Unexpected error: {e}");
            Err(ApiError::Internal)
        }
    }
}
```
This is the value of typed errors: the handler can distinguish between "email conflict → 409" and "database error → 500" without stringly-typed error messages.
anyhow: Pragmatic Error Propagation
[dependencies]
anyhow = "1"
```rust
use anyhow::{Context, Result};

pub async fn startup_checks(config: &Config) -> Result<()> {
    // Context adds a message to the error chain
    let pool = create_pool(&config.database_url)
        .await
        .context("Failed to connect to database")?;

    sqlx::migrate!("./migrations")
        .run(&pool)
        .await
        .context("Failed to run database migrations")?;

    ping_cache(&config.redis_url)
        .await
        .context("Failed to connect to Redis")?;

    Ok(())
}
```
anyhow::Error is an owned, type-erased error: conceptually a boxed dyn std::error::Error + Send + Sync (with some pointer tricks to keep it a single word wide). It erases the specific error type, which means you lose the ability to match on it — but you gain the ability to use ? with any error type, compose errors freely, and add context at each layer.
The Context trait (from anyhow) adds .context() and .with_context() to Result values. The context message is prepended to the error chain, which shows up when you print with {:#}:
Failed to run database migrations: error connecting to database: connection refused (os error 111)
Each context() call adds a layer to the error chain, making it clear where the error originated.
anyhow in Axum Handlers
anyhow::Error doesn't implement IntoResponse, so you can't return it directly from Axum handlers. You need a thin wrapper:
```rust
use anyhow::{Context, Error};
use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};

/// Wraps anyhow::Error for use as an Axum response.
/// Logs the error and returns a 500.
pub struct ServerError(Error);

impl IntoResponse for ServerError {
    fn into_response(self) -> Response {
        tracing::error!("Internal server error: {:#}", self.0);
        StatusCode::INTERNAL_SERVER_ERROR.into_response()
    }
}

// Allow using ? with anyhow::Error in handlers
impl<E: Into<Error>> From<E> for ServerError {
    fn from(err: E) -> Self {
        Self(err.into())
    }
}

// Usage:
async fn my_handler(State(pool): State<PgPool>) -> Result<Json<Vec<User>>, ServerError> {
    let users = fetch_users(&pool)
        .await
        .context("Failed to fetch users")?; // ? converts to ServerError via From
    Ok(Json(users))
}
```
This pattern is sometimes called the "anyhow handler" pattern. You get the convenience of anyhow for error propagation and the IntoResponse integration Axum needs.
A Complete Error Architecture
For a real application, here's a pattern that scales:
```rust
use axum::{
    http::StatusCode,
    response::{IntoResponse, Json, Response},
};
use thiserror::Error;

/// Domain errors — typed, matchable, returned from your business logic layer
#[derive(Debug, Error)]
pub enum AppError {
    // Auth errors
    #[error("Authentication required")]
    Unauthenticated,
    #[error("Insufficient permissions")]
    Forbidden,

    // Resource errors
    #[error("{resource} with id {id} not found")]
    NotFound { resource: &'static str, id: String },
    #[error("{field} already exists: {value}")]
    Conflict { field: &'static str, value: String },

    // Validation errors
    #[error("Validation failed: {0}")]
    Validation(String),

    // Infrastructure errors — wrap the low-level error
    #[error("Database error")]
    Database(#[from] sqlx::Error),
    #[error("Internal error")]
    Internal(#[from] anyhow::Error),
}

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, code, message) = match &self {
            AppError::Unauthenticated => {
                (StatusCode::UNAUTHORIZED, "UNAUTHENTICATED", self.to_string())
            }
            AppError::Forbidden => (StatusCode::FORBIDDEN, "FORBIDDEN", self.to_string()),
            AppError::NotFound { .. } => (StatusCode::NOT_FOUND, "NOT_FOUND", self.to_string()),
            AppError::Conflict { .. } => (StatusCode::CONFLICT, "CONFLICT", self.to_string()),
            AppError::Validation(msg) => {
                (StatusCode::UNPROCESSABLE_ENTITY, "VALIDATION_ERROR", msg.clone())
            }
            AppError::Database(e) => {
                tracing::error!("Database error: {e}");
                (
                    StatusCode::INTERNAL_SERVER_ERROR,
                    "INTERNAL_ERROR",
                    "An internal error occurred".to_string(),
                )
            }
            AppError::Internal(e) => {
                tracing::error!("Internal error: {e:#}");
                (
                    StatusCode::INTERNAL_SERVER_ERROR,
                    "INTERNAL_ERROR",
                    "An internal error occurred".to_string(),
                )
            }
        };

        (
            status,
            Json(serde_json::json!({
                "error": { "code": code, "message": message }
            })),
        )
            .into_response()
    }
}

// Convenient type alias for handlers
pub type ApiResult<T> = Result<T, AppError>;
```
Now handlers look like:
```rust
async fn get_user(
    State(pool): State<PgPool>,
    Path(id): Path<uuid::Uuid>,
) -> ApiResult<Json<User>> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(&pool)
    .await? // sqlx::Error converts to AppError::Database via #[from]
    .map(Json)
    .ok_or_else(|| AppError::NotFound {
        resource: "User",
        id: id.to_string(),
    })
}
```
Error Context in Logs
The {:#} format specifier on anyhow::Error prints the full error chain. This is what you want in logs:
```rust
if let Err(e) = result {
    tracing::error!("Operation failed: {e:#}");
    // Prints something like:
    // Operation failed: failed to send email: connection refused (os error 111)
}
```
Without #, you get only the top-level message. With #, you get the full causal chain. Use # in your logs.
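Under the hood, anyhow's {:#} output is essentially a walk of the std::error::Error source chain. This dependency-free sketch shows the same idea with hand-rolled, illustrative error types:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical low-level cause.
#[derive(Debug)]
struct ConnectionRefused;
impl fmt::Display for ConnectionRefused {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "connection refused")
    }
}
impl Error for ConnectionRefused {}

// Hypothetical higher-level error that wraps the cause.
#[derive(Debug)]
struct SendEmailError(ConnectionRefused);
impl fmt::Display for SendEmailError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to send email")
    }
}
impl Error for SendEmailError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.0)
    }
}

// Render "top message: cause: cause", roughly what anyhow's {:#} produces.
fn full_chain(err: &dyn Error) -> String {
    let mut out = err.to_string();
    let mut current = err.source();
    while let Some(cause) = current {
        out.push_str(": ");
        out.push_str(&cause.to_string());
        current = cause.source();
    }
    out
}

fn main() {
    let err = SendEmailError(ConnectionRefused);
    println!("{}", full_chain(&err));
}
```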
The ? Operator and Implicit Conversions
The ? operator calls From::from on the error. This is why #[from] in thiserror and the From impl for ServerError above work — they hook into the ? operator's type coercion.
When ? doesn't compile, it's because there's no From impl from the error you have to the error type you need. The fix is either:
- Add a From impl (or #[from] attribute)
- Map the error explicitly: .map_err(|e| AppError::Database(e))?
- Use .context("message")? with anyhow
The third option is the escape hatch when you want to stop carrying exact error types and just get the thing to compile.
Panics in Production
Panics in async tasks don't propagate. They're caught by the Tokio runtime and surfaced as JoinError when you .await the JoinHandle. If you're not awaiting the handle, the panic is silently swallowed.
Set a panic hook to log panics before they're caught:
```rust
std::panic::set_hook(Box::new(|info| {
    let backtrace = std::backtrace::Backtrace::capture();
    tracing::error!("Panic: {info}\n{backtrace}");
}));
```
Or use the tokio-panic-hook crate to automatically convert task panics into errors. Either way, you want panics to appear in your logs — the default behavior of silently terminating a task while leaving the rest of the server running is a debugging nightmare.
The rule: use panic! for programming errors (invariant violations that shouldn't happen if the code is correct). Use Result for expected failure modes (network errors, invalid input, missing records). If you find yourself using unwrap() on things that could reasonably fail in production, that's a Result opportunity.
Configuration and Secrets: Environments, dotenv, and Not Leaking Things
Configuration is boring until it isn't. The interesting failure modes are: the wrong database URL getting used in production, a secret getting committed to git, a missing environment variable causing a panic at startup, or a configuration change requiring a code deploy. This chapter is about avoiding all of those.
The Pattern That Works
1. Define a typed Config struct that represents your entire application configuration.
2. Load it at startup and fail loudly if anything is missing or invalid.
3. Pass it through your application via AppState or Arc<Config>.
4. Never read environment variables inside handlers.
The key insight is in point 4. If you call std::env::var("DATABASE_URL") inside a handler, you've hidden a startup dependency behind runtime execution. You won't discover the missing variable until a request hits that code path. Move all configuration loading to startup.
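A dependency-free sketch of that rule, before bringing in the config crate: every required variable is read once, at startup, into a plain struct, and a missing one fails immediately with a named error. The variable names and the required helper are illustrative:

```rust
use std::env;

// Minimal illustrative config; the real chapter uses the config crate.
#[derive(Debug)]
struct Config {
    database_url: String,
    port: u16,
}

// Read a required variable, failing with the variable's name in the message.
fn required(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("missing required environment variable {name}"))
}

impl Config {
    fn from_env() -> Result<Self, String> {
        Ok(Self {
            database_url: required("DATABASE_URL")?,
            port: required("PORT")?
                .parse()
                .map_err(|e| format!("PORT is not a valid port number: {e}"))?,
        })
    }
}

fn main() {
    // At startup: either a complete config or an immediate, named failure.
    match Config::from_env() {
        Ok(config) => println!("loaded: {config:?}"),
        Err(msg) => eprintln!("refusing to start: {msg}"),
    }
}
```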
The config Crate
[dependencies]
config = "0.14"
serde = { version = "1", features = ["derive"] }
dotenvy = "0.15"
config supports layered configuration: defaults → config files → environment variables → whatever else you add. Environment variables override config files, which override defaults. This is the right behavior for twelve-factor applications.
```rust
use config::{Config, ConfigError, Environment, File};
use serde::Deserialize;

#[derive(Debug, Deserialize, Clone)]
pub struct AppConfig {
    pub database: DatabaseConfig,
    pub server: ServerConfig,
    pub auth: AuthConfig,
    pub log_level: String,
}

#[derive(Debug, Deserialize, Clone)]
pub struct DatabaseConfig {
    pub url: String,
    pub max_connections: u32,
    pub min_connections: u32,
}

#[derive(Debug, Deserialize, Clone)]
pub struct ServerConfig {
    pub host: String,
    pub port: u16,
}

#[derive(Debug, Deserialize, Clone)]
pub struct AuthConfig {
    pub jwt_secret: String,
    pub jwt_expiration_seconds: u64,
}

impl AppConfig {
    pub fn load() -> Result<Self, ConfigError> {
        let env = std::env::var("APP_ENV").unwrap_or_else(|_| "development".into());

        Config::builder()
            // Base defaults
            .set_default("log_level", "info")?
            .set_default("server.host", "0.0.0.0")?
            .set_default("server.port", 3000)?
            .set_default("database.max_connections", 20)?
            .set_default("database.min_connections", 2)?
            .set_default("auth.jwt_expiration_seconds", 3600)?
            // Layer 1: Base config file
            .add_source(File::with_name("config/default").required(false))
            // Layer 2: Environment-specific config
            .add_source(File::with_name(&format!("config/{env}")).required(false))
            // Layer 3: Environment variables (APP_DATABASE__URL, APP_AUTH__JWT_SECRET, etc.)
            // The double-underscore separator keeps field names that contain
            // underscores (max_connections) intact.
            .add_source(
                Environment::with_prefix("APP")
                    .separator("__")
                    .try_parsing(true),
            )
            .build()?
            .try_deserialize()
    }
}
```
With Environment::with_prefix("APP").separator("__"), the environment variable APP_DATABASE__URL maps to config.database.url, and APP_DATABASE__MAX_CONNECTIONS maps to database.max_connections. The double underscore matters: a single-underscore separator would split on every underscore, so max_connections would become the nonexistent nested key max.connections.
dotenv for Development
In development, you don't want to set twenty environment variables every time you open a terminal. .env files solve this:
# .env (NEVER commit this file)
APP_DATABASE__URL=postgres://postgres:password@localhost/myapp_dev
APP_AUTH__JWT_SECRET=development-secret-not-for-production
```rust
fn main() {
    // Load .env file before reading config.
    // In production, this is a no-op if the file doesn't exist.
    dotenvy::dotenv().ok(); // .ok() silences the "file not found" error

    let config = AppConfig::load().expect("Failed to load configuration");
    // ...
}
```
dotenvy (the maintained fork of dotenv) loads the .env file into the process environment. It only sets variables that aren't already set, so actual environment variables always win. This means your production deployment doesn't need to care about .env files — they simply don't exist there.
Add .env to your .gitignore. Immediately. Before you put secrets in it.
# .gitignore
.env
.env.local
*.env
Commit a .env.example instead:
# .env.example — copy to .env and fill in values
APP_DATABASE__URL=postgres://localhost/myapp_dev
APP_AUTH__JWT_SECRET=change-me
Validating Configuration at Startup
The config crate's try_deserialize() will fail if a required field is missing or has the wrong type. But you can add richer validation:
```rust
use std::net::SocketAddr;

impl AppConfig {
    pub fn load_and_validate() -> anyhow::Result<Self> {
        let config = Self::load()?;
        config.validate()?;
        Ok(config)
    }

    fn validate(&self) -> anyhow::Result<()> {
        use anyhow::ensure;

        ensure!(
            !self.auth.jwt_secret.is_empty(),
            "auth.jwt_secret must not be empty"
        );
        ensure!(
            self.auth.jwt_secret.len() >= 32,
            "auth.jwt_secret must be at least 32 characters"
        );
        ensure!(
            self.database.max_connections >= self.database.min_connections,
            "database.max_connections must be >= database.min_connections"
        );

        // Validate that the bind address parses
        let addr = format!("{}:{}", self.server.host, self.server.port);
        addr.parse::<SocketAddr>()
            .map_err(|e| anyhow::anyhow!("Invalid server address {addr}: {e}"))?;

        Ok(())
    }
}
```
Fail at startup with a clear message rather than failing at runtime with a cryptic one. If the configuration is wrong, the service should refuse to start, not silently degrade.
Secrets: What Not to Do
Do not:
- Commit secrets to version control. (Yes, even in "private" repos. Rotate anything you accidentally committed.)
- Log configuration values that contain secrets.
- Print the full AppConfig struct in logs.
- Store secrets in Docker environment variables in a docker-compose.yml that you then commit.
```rust
// BAD: Logs the JWT secret
tracing::info!("Loaded config: {:?}", config);

// GOOD: Implement Debug manually or use a redacting wrapper
impl std::fmt::Debug for AuthConfig {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("AuthConfig")
            .field("jwt_expiration_seconds", &self.jwt_expiration_seconds)
            .field("jwt_secret", &"[REDACTED]")
            .finish()
    }
}
```
Or use the secrecy crate, which provides a Secret<T> wrapper that redacts the value in Debug and Display output by default:
[dependencies]
secrecy = { version = "0.8", features = ["serde"] }
```rust
use secrecy::{ExposeSecret, Secret};

#[derive(Deserialize, Clone)]
pub struct AuthConfig {
    pub jwt_secret: Secret<String>,
    pub jwt_expiration_seconds: u64,
}

// Using the secret requires explicitly calling .expose_secret()
let key = EncodingKey::from_secret(config.auth.jwt_secret.expose_secret().as_bytes());
```
The expose_secret() call is intentional friction. It makes secret access visible in code review and greppable.
Secrets Management in Production
For production secrets, environment variables are acceptable but not ideal. Better options:
HashiCorp Vault: Pull secrets at startup using the vaultrs crate. Rotate secrets without redeploys.
AWS Secrets Manager / GCP Secret Manager / Azure Key Vault: Cloud-native options. Credentials to access the secrets manager are handled by the cloud provider's IAM system (instance roles, workload identity), avoiding the bootstrap problem.
Kubernetes Secrets: Mounted as environment variables or files. Better than hardcoding, but not as good as a dedicated secrets manager: the values are only base64-encoded, and they are not encrypted at rest in etcd by default (you have to configure encryption at rest separately).
A minimal secret-at-startup pattern with AWS Secrets Manager:
```rust
// Requires: aws-sdk-secretsmanager = "1"
// Not shown in full — illustrative only
use anyhow::Context;

pub async fn load_secret(secret_name: &str) -> anyhow::Result<String> {
    let config = aws_config::load_from_env().await;
    let client = aws_sdk_secretsmanager::Client::new(&config);

    let response = client
        .get_secret_value()
        .secret_id(secret_name)
        .send()
        .await
        .context("Failed to retrieve secret")?;

    response
        .secret_string()
        .map(|s| s.to_string())
        .ok_or_else(|| anyhow::anyhow!("Secret {secret_name} has no string value"))
}
```
The pattern is always the same: load secrets at startup, validate them, inject them into your AppConfig, then never access the secrets manager again during request handling.
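The "validate them" step deserves emphasis: it is much cheaper to refuse to start than to serve traffic with a broken secret. A minimal sketch of such a startup check — the function name and the specific rules are illustrative, not part of this chapter's AppConfig:

```rust
// Hypothetical startup validation helper; the name and rules are
// illustrative, not from this chapter's AppConfig.
pub fn validate_jwt_secret(secret: &str) -> Result<(), String> {
    // Refuse to boot with a secret too short to be safe.
    if secret.len() < 32 {
        return Err(format!(
            "jwt_secret must be at least 32 bytes, got {}",
            secret.len()
        ));
    }
    // Catch placeholder values that leaked in from an example .env file.
    if secret.contains("changeme") || secret.contains("example") {
        return Err("jwt_secret looks like a placeholder value".to_string());
    }
    Ok(())
}
```

Run checks like this inside your config-loading path, before the server binds a port, so a bad deploy fails loudly at startup instead of minting tokens nobody can verify.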
Config File Example
A config/default.toml for local development defaults:
log_level = "debug"

[server]
host = "127.0.0.1"
port = 3000

[database]
max_connections = 5
min_connections = 1

[auth]
jwt_expiration_seconds = 3600
Environment-specific overrides in config/production.toml:
log_level = "info"

[server]
host = "0.0.0.0"

[database]
max_connections = 20
min_connections = 5
Secrets (database.url, auth.jwt_secret) never appear in committed config files — only in environment variables or a secrets manager.
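The layering rule behind these files — file values act as defaults, environment variables override them — can be sketched without any config crate. This is a hand-rolled illustration of the precedence only; the APP_ prefix and the `__` separator are assumptions, and the real loader would merge the TOML files as well:

```rust
use std::collections::HashMap;

// Resolve a dotted config key: an APP_-prefixed environment variable
// (with `.` mapped to `__`) wins over the value from the config files.
fn resolve(
    file_values: &HashMap<String, String>,
    env_vars: &HashMap<String, String>,
    key: &str,
) -> Option<String> {
    let env_key = format!("APP_{}", key.to_uppercase().replace('.', "__"));
    env_vars
        .get(&env_key)
        .or_else(|| file_values.get(key))
        .cloned()
}
```

The payoff of this precedence is that the same image runs in every environment: files carry safe defaults, and anything secret or environment-specific arrives from outside.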
Wiring It All Together
```rust
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load .env for development; no-op in production
    dotenvy::dotenv().ok();

    // Initialize tracing before doing anything else
    // so that config loading failures are logged
    init_tracing();

    let config = AppConfig::load_and_validate()
        .context("Configuration error — check environment variables")?;

    tracing::info!(
        host = %config.server.host,
        port = %config.server.port,
        "Starting server"
    );

    let pool = create_pool(&config.database.url, &config.database)
        .await
        .context("Failed to create database pool")?;

    run_migrations(&pool).await?;

    let state = AppState {
        pool,
        config: Arc::new(config),
    };

    let app = create_app(state.clone());

    let addr = format!("{}:{}", state.config.server.host, state.config.server.port);
    let listener = tokio::net::TcpListener::bind(&addr).await?;
    tracing::info!("Listening on {addr}");

    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal())
        .await?;

    Ok(())
}
```
The main function returns anyhow::Result<()> — any unhandled error causes the process to exit with a message. This is fine for startup errors. During request handling, you use the typed error approach from the previous chapter.
Testing Backend Services: Unit, Integration, and the Awkward Middle
Testing a backend service is harder than testing a library because your code is entangled with infrastructure: databases, HTTP, file systems, external APIs. The standard advice is "mock everything" or "test everything against real infrastructure." Both extremes have problems. This chapter is about finding the pragmatic middle.
Unit Tests: The Easy Part
Pure functions with no I/O are straightforward to test. Rust's built-in test runner handles them well:
```rust
// src/domain/pricing.rs
pub fn calculate_discount(
    base_price_cents: i64,
    discount_percent: f64,
) -> Result<i64, PricingError> {
    // Range check via `contains` also rejects NaN, which would slip
    // past a pair of `<` / `>` comparisons.
    if !(0.0..=100.0).contains(&discount_percent) {
        return Err(PricingError::InvalidDiscount { percent: discount_percent });
    }
    let discount = (base_price_cents as f64 * discount_percent / 100.0) as i64;
    Ok(base_price_cents - discount)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_discount_calculation() {
        assert_eq!(calculate_discount(1000, 10.0), Ok(900));
        assert_eq!(calculate_discount(1000, 0.0), Ok(1000));
        assert_eq!(calculate_discount(1000, 100.0), Ok(0));
    }

    #[test]
    fn test_discount_rejects_invalid_percent() {
        assert!(calculate_discount(1000, -1.0).is_err());
        assert!(calculate_discount(1000, 101.0).is_err());
    }

    #[test]
    fn test_discount_rounds_down() {
        // 33.33% of 999 cents is 332.9967; the `as i64` cast truncates
        // the discount to 332, so the final price is 667, not 666.
        assert_eq!(calculate_discount(999, 33.33), Ok(667));
    }
}
```
No dependencies, no fixtures, no infrastructure. The compiler and the test runner are all you need.
Testing Async Code
Async unit tests use #[tokio::test]:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_password_verification() {
        let hash = hash_password_async("correct-horse-battery-staple".to_string())
            .await
            .unwrap();

        let valid = verify_password_async(
            "correct-horse-battery-staple".to_string(),
            hash.clone(),
        )
        .await
        .unwrap();
        assert!(valid);

        let invalid = verify_password_async("wrong-password".to_string(), hash)
            .await
            .unwrap();
        assert!(!invalid);
    }
}
```
#[tokio::test] spins up a Tokio runtime for the test. By default it uses the current-thread (single-threaded) runtime, which is deterministic and appropriate for unit tests. If you need the multi-threaded scheduler, use #[tokio::test(flavor = "multi_thread")].
Integration Tests with a Real Database
This is where it gets interesting. SQLx's compile-time query verification means your queries are checked against the schema, but it doesn't test that they return the right results. For that, you need a real database.
The standard approach: a test database that gets reset between test runs (or between tests).
Test Database Setup
# Create a test database
createdb myapp_test
# Run migrations
DATABASE_URL=postgres://localhost/myapp_test cargo sqlx migrate run
Add to your Cargo.toml:
[dev-dependencies]
tokio = { version = "1", features = ["full", "test-util"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "postgres", "macros", "uuid", "time"] }
Test Fixtures and Cleanup
```rust
// tests/common/mod.rs
use sqlx::PgPool;

pub async fn create_test_pool() -> PgPool {
    let database_url = std::env::var("TEST_DATABASE_URL")
        .unwrap_or_else(|_| "postgres://localhost/myapp_test".to_string());

    let pool = sqlx::postgres::PgPoolOptions::new()
        .max_connections(5)
        .connect(&database_url)
        .await
        .expect("Failed to connect to test database");

    // Run migrations to ensure schema is current
    sqlx::migrate!("./migrations")
        .run(&pool)
        .await
        .expect("Migrations failed");

    pool
}

pub async fn truncate_tables(pool: &PgPool) {
    sqlx::query!("TRUNCATE TABLE users, sessions RESTART IDENTITY CASCADE")
        .execute(pool)
        .await
        .expect("Failed to truncate tables");
}
```
Database Integration Tests
```rust
// tests/user_repository.rs
mod common;

#[tokio::test]
async fn test_create_and_fetch_user() {
    let pool = common::create_test_pool().await;
    common::truncate_tables(&pool).await;

    // Create a user
    let user = sqlx::query_as!(
        User,
        r#"
        INSERT INTO users (email, name, password_hash)
        VALUES ($1, $2, $3)
        RETURNING id, email, name, created_at, updated_at
        "#,
        "alice@example.com",
        "Alice",
        "hashed_password"
    )
    .fetch_one(&pool)
    .await
    .expect("Failed to create user");

    assert_eq!(user.email, "alice@example.com");
    assert_eq!(user.name, "Alice");

    // Fetch it back
    let fetched = sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at, updated_at FROM users WHERE id = $1",
        user.id
    )
    .fetch_optional(&pool)
    .await
    .expect("Failed to fetch user");

    assert!(fetched.is_some());
    assert_eq!(fetched.unwrap().id, user.id);
}

#[tokio::test]
async fn test_unique_email_constraint() {
    let pool = common::create_test_pool().await;
    common::truncate_tables(&pool).await;

    let create = |email: &'static str| async {
        sqlx::query!(
            "INSERT INTO users (email, name, password_hash) VALUES ($1, $2, $3)",
            email,
            "Test User",
            "hash"
        )
        .execute(&pool)
        .await
    };

    create("same@example.com").await.expect("First insert should succeed");

    let result = create("same@example.com").await;
    assert!(result.is_err());

    let err = result.unwrap_err();
    if let sqlx::Error::Database(db_err) = &err {
        assert!(db_err.constraint().is_some(), "Expected unique constraint violation");
    } else {
        panic!("Expected database error, got: {err}");
    }
}
```
The truncate_tables call between tests is important. Tests share a pool and a database, so they share state. TRUNCATE ... CASCADE removes all rows and cascades to dependent tables. RESTART IDENTITY resets sequences. Tests that don't clean up after themselves cause phantom failures in other tests — the most infuriating class of intermittent CI failure.
Integration Tests with the Full HTTP Stack
The axum-test crate lets you test your entire router without binding to a port (you can also drive the Router directly with tower::ServiceExt::oneshot, but axum-test's API is friendlier for multi-request tests):
[dev-dependencies]
axum-test = "14"
serde_json = "1"
```rust
// tests/api_users.rs
use axum_test::TestServer;
use serde_json::json;

mod common;

async fn build_test_app(pool: sqlx::PgPool) -> TestServer {
    let state = AppState {
        pool,
        config: std::sync::Arc::new(AppConfig::test_defaults()),
    };
    let app = create_app(state);
    TestServer::new(app).unwrap()
}

#[tokio::test]
async fn test_create_user_returns_201() {
    let pool = common::create_test_pool().await;
    common::truncate_tables(&pool).await;
    let server = build_test_app(pool).await;

    let response = server
        .post("/api/v1/users")
        .json(&json!({
            "email": "bob@example.com",
            "name": "Bob",
            "password": "secure-password-123"
        }))
        .await;

    response.assert_status_success();
    let body: serde_json::Value = response.json();
    assert_eq!(body["email"], "bob@example.com");
    assert!(body["id"].is_string());
}

#[tokio::test]
async fn test_create_user_rejects_duplicate_email() {
    let pool = common::create_test_pool().await;
    common::truncate_tables(&pool).await;
    let server = build_test_app(pool).await;

    let payload = json!({
        "email": "duplicate@example.com",
        "name": "User",
        "password": "password123"
    });

    server.post("/api/v1/users").json(&payload).await.assert_status_success();

    let second_response = server.post("/api/v1/users").json(&payload).await;
    second_response.assert_status(axum::http::StatusCode::CONFLICT);
}

#[tokio::test]
async fn test_get_user_requires_authentication() {
    let pool = common::create_test_pool().await;
    let server = build_test_app(pool).await;

    let response = server.get("/api/v1/users/me").await;
    response.assert_status(axum::http::StatusCode::UNAUTHORIZED);
}

#[tokio::test]
async fn test_authenticated_request() {
    let pool = common::create_test_pool().await;
    common::truncate_tables(&pool).await;
    let server = build_test_app(pool).await;

    // Create user
    let create_response = server
        .post("/api/v1/users")
        .json(&json!({"email": "carol@example.com", "name": "Carol", "password": "pass123"}))
        .await;
    create_response.assert_status_success();

    // Login
    let login_response = server
        .post("/api/v1/auth/login")
        .json(&json!({"email": "carol@example.com", "password": "pass123"}))
        .await;
    login_response.assert_status_success();

    let token: String = login_response.json::<serde_json::Value>()["access_token"]
        .as_str()
        .unwrap()
        .to_string();

    // Authenticated request
    let me_response = server
        .get("/api/v1/users/me")
        .add_header("Authorization", format!("Bearer {token}").parse().unwrap())
        .await;

    me_response.assert_status_success();
    let user: serde_json::Value = me_response.json();
    assert_eq!(user["email"], "carol@example.com");
}
```
These tests exercise the full stack: routing, middleware, extractors, handlers, and the database. They're slower than unit tests but catch integration failures that unit tests can't.
The Awkward Middle: Mocking
Sometimes you need to test code that calls external services without actually calling them. The standard Rust approach is trait-based mocking:
```rust
// Define a trait for the external dependency
#[async_trait::async_trait]
pub trait EmailService: Send + Sync {
    async fn send_welcome_email(&self, email: &str, name: &str) -> Result<(), EmailError>;
}

// Real implementation
pub struct SendgridEmailService {
    api_key: String,
}

#[async_trait::async_trait]
impl EmailService for SendgridEmailService {
    async fn send_welcome_email(&self, email: &str, name: &str) -> Result<(), EmailError> {
        // ... actual HTTP call to Sendgrid
        todo!()
    }
}

// Your handler takes the trait, not the concrete type
pub async fn register_user<E: EmailService>(
    email_service: &E,
    pool: &sqlx::PgPool,
    payload: CreateUser,
) -> Result<User, AppError> {
    let user = create_user(pool, payload).await?;
    email_service.send_welcome_email(&user.email, &user.name).await?;
    Ok(user)
}
```
Test with a mock:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;

    struct MockEmailService {
        called: Arc<AtomicBool>,
        should_fail: bool,
    }

    #[async_trait::async_trait]
    impl EmailService for MockEmailService {
        async fn send_welcome_email(&self, _email: &str, _name: &str) -> Result<(), EmailError> {
            self.called.store(true, Ordering::SeqCst);
            if self.should_fail {
                Err(EmailError::ServiceUnavailable)
            } else {
                Ok(())
            }
        }
    }

    #[tokio::test]
    async fn test_register_sends_welcome_email() {
        let pool = crate::common::create_test_pool().await;
        let called = Arc::new(AtomicBool::new(false));
        let email_service = MockEmailService { called: called.clone(), should_fail: false };

        let result = register_user(
            &email_service,
            &pool,
            CreateUser { email: "dave@example.com".into(), name: "Dave".into() },
        ).await;

        assert!(result.is_ok());
        assert!(called.load(Ordering::SeqCst), "Email should have been sent");
    }
}
```
For more complex mocking needs, mockall generates mock implementations automatically. But start with hand-rolled mocks — they're simpler, easier to debug, and don't require a proc-macro dependency.
Running Tests
# Run all tests
cargo test
# Run tests with database logging
TEST_DATABASE_URL=postgres://localhost/myapp_test cargo test
# Run a specific test
cargo test test_create_user_returns_201
# Run tests with output (don't capture stdout)
cargo test -- --nocapture
# Run tests in a single thread (useful when tests share database state)
cargo test -- --test-threads=1
The -- --test-threads=1 flag is important when your tests share a database. Parallel tests that truncate tables will stomp on each other. Either run single-threaded, or give each test its own isolated state: wrap each test in a transaction you roll back, or use sqlx's #[sqlx::test] attribute, which provisions a fresh database per test in recent sqlx versions.
What to Test and What to Skip
Test:
- Business logic in pure functions — always
- Database queries — at least the non-trivial ones
- Happy path and error cases in HTTP handlers
- Authentication and authorization boundaries
- Constraint violations (unique, foreign key, not null)
Skip (or test separately via end-to-end):
- Framework behavior you don't own (Axum's routing, SQLx's type mapping)
- Pure configuration wiring
- Code that's just boilerplate
The signal is: if this test is going to tell you something you didn't already know when you wrote the code, write it. If it's testing that Axum routes requests correctly — you can trust Axum to do that.
Observability: Tracing, Logging, and Knowing What Broke
The relationship between tracing and log in Rust is like the relationship between a well-organized crime scene and someone's diary. Both record events. One is structured, contextual, and queryable. The other is a sequence of strings.
Use tracing. It's the standard for async Rust services. The log crate still exists and still works, but tracing is what you want for production services — it supports structured key-value fields, spans that capture context across async await points, and a subscriber system that can route events to multiple backends simultaneously.
Setup
[dependencies]
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = [
"env-filter",
"json",
"fmt",
] }
```rust
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter};

pub fn init_tracing() {
    tracing_subscriber::registry()
        .with(
            EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| EnvFilter::new("info")),
        )
        .with(
            tracing_subscriber::fmt::layer()
                .with_target(true)
                .with_thread_ids(false),
        )
        .init();
}
```
EnvFilter::try_from_default_env() reads the RUST_LOG environment variable. Set it to control verbosity:
RUST_LOG=info # All crates at info level
RUST_LOG=myapp=debug,sqlx=warn # Fine-grained control
RUST_LOG=myapp::handlers=trace # Module-level control
The Difference Between Events and Spans
An event is a point-in-time occurrence. A span is a period of time with a start and end, representing a unit of work.
```rust
// Event: records something that happened
tracing::info!("Request received");
tracing::warn!(user_id = %id, "User not found");
tracing::error!(error = %e, "Database connection failed");

// Span: wraps a unit of work
async fn handle_request(user_id: Uuid) {
    let span = tracing::info_span!("handle_request", %user_id);
    let _guard = span.enter(); // or use .instrument()

    // Events inside this span are associated with it
    tracing::info!("Processing request");
    do_work().await;
    tracing::info!("Request complete");
}
```
The key insight: in async code, a span needs to be attached to a Future, not entered with .enter(). If you hold an .enter() guard across an .await, the guard stays active while your task is suspended, so whatever else the executor polls on that thread gets attributed to your span — the recorded scope becomes meaningless.
Use .instrument():
```rust
use tracing::Instrument;

async fn process_order(order_id: Uuid, user_id: Uuid) -> Result<(), AppError> {
    async move {
        tracing::info!("Starting order processing");

        let items = fetch_order_items(order_id).await?;
        tracing::info!(item_count = items.len(), "Fetched order items");

        charge_payment(user_id, calculate_total(&items)).await?;
        tracing::info!("Payment charged");

        Ok(())
    }
    .instrument(tracing::info_span!("process_order", %order_id, %user_id))
    .await
}
```
Everything inside the instrumented future runs in the context of the span. When the future yields at .await points, the span is correctly suspended and resumed with the future. Nested calls automatically see the parent span in their context.
Structured Fields
This is the whole point. Unstructured:
```rust
tracing::info!("User {} logged in from {}", user_id, ip_address);
```
Structured:
```rust
tracing::info!(
    user_id = %user_id,
    ip = %ip_address,
    "User logged in"
);
```
The % sigil uses Display. Use ? for Debug. Fields without a sigil use the value directly (for types that implement tracing::Value).
In structured form, user_id and ip are first-class fields, not parts of a string. Your log aggregation system can filter by them, count them, and alert on them. You cannot meaningfully query unstructured log strings at scale.
```rust
tracing::error!(
    error = %e,
    error.details = ?e, // Debug representation for the full error
    user_id = %auth.user_id,
    request_id = %req_id,
    "Payment failed"
);
```
Axum Integration with tower-http's TraceLayer
You added TraceLayer in the HTTP chapter. Here's what it's doing and how to customize it:
```rust
use tower_http::trace::{DefaultMakeSpan, DefaultOnResponse, TraceLayer};
use tracing::Level;

let trace_layer = TraceLayer::new_for_http()
    .make_span_with(
        DefaultMakeSpan::new()
            .level(Level::INFO)
            .include_headers(false), // Don't log all headers by default
    )
    .on_response(
        DefaultOnResponse::new()
            .level(Level::INFO)
            .latency_unit(tower_http::LatencyUnit::Millis),
    );
```
Or write custom span creation:
```rust
use axum::extract::Request;
use tower_http::trace::MakeSpan;

#[derive(Clone)]
struct CustomMakeSpan;

impl MakeSpan<axum::body::Body> for CustomMakeSpan {
    fn make_span(&mut self, request: &Request<axum::body::Body>) -> tracing::Span {
        let request_id = request
            .headers()
            .get("x-request-id")
            .and_then(|v| v.to_str().ok())
            .unwrap_or("unknown");

        tracing::info_span!(
            "request",
            method = %request.method(),
            uri = %request.uri(),
            request_id = %request_id,
        )
    }
}
```
JSON Logging for Production
In development, human-readable output is fine. In production, you want JSON so your log aggregator (Datadog, CloudWatch, Loki, etc.) can parse it:
```rust
pub fn init_tracing(json: bool) {
    let registry = tracing_subscriber::registry()
        .with(
            EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| EnvFilter::new("info")),
        );

    if json {
        registry
            .with(tracing_subscriber::fmt::layer().json())
            .init();
    } else {
        registry
            .with(tracing_subscriber::fmt::layer().pretty())
            .init();
    }
}

// In main:
let json_logs = std::env::var("APP_ENV")
    .map(|e| e == "production")
    .unwrap_or(false);
init_tracing(json_logs);
```
Each log line becomes a JSON object:
{
"timestamp": "2024-01-15T10:23:45.123456Z",
"level": "INFO",
"target": "myapp::handlers::users",
"span": {"name": "request", "method": "POST", "uri": "/api/v1/users"},
"fields": {
"message": "User created",
"user_id": "550e8400-e29b-41d4-a716-446655440000"
}
}
Distributed Tracing with OpenTelemetry
For systems with multiple services, you want traces to span service boundaries. OpenTelemetry is the standard:
[dependencies]
opentelemetry = { version = "0.22", features = ["trace"] }
opentelemetry-otlp = { version = "0.15", features = ["tonic"] }
opentelemetry_sdk = { version = "0.22", features = ["rt-tokio", "trace"] }
tracing-opentelemetry = "0.23"
```rust
use opentelemetry::global;
use opentelemetry_otlp::WithExportConfig;
use opentelemetry_sdk::runtime;
use tracing_opentelemetry::OpenTelemetryLayer;

pub fn init_tracing_with_otel(service_name: &str, otlp_endpoint: &str) {
    // Initialize OTLP exporter (sends to Jaeger, Tempo, etc.)
    let tracer = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint(otlp_endpoint),
        )
        .with_trace_config(
            opentelemetry_sdk::trace::Config::default()
                .with_resource(opentelemetry_sdk::Resource::new(vec![
                    opentelemetry::KeyValue::new("service.name", service_name.to_string()),
                ])),
        )
        .install_batch(runtime::Tokio)
        .expect("Failed to install OpenTelemetry tracer");

    tracing_subscriber::registry()
        .with(EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")))
        .with(tracing_subscriber::fmt::layer().json())
        .with(OpenTelemetryLayer::new(tracer))
        .init();
}

// Graceful shutdown: flush pending spans
pub fn shutdown_tracing() {
    global::shutdown_tracer_provider();
}
```
With this setup, your tracing spans are automatically exported to your observability backend. Jaeger, Grafana Tempo, Honeycomb, Datadog — they all support the OTLP protocol.
Request IDs
Every request should get a unique ID that's logged with every event during that request's lifetime:
```rust
use tower_http::request_id::{MakeRequestUuid, PropagateRequestIdLayer, SetRequestIdLayer};

// In your router:
let app = Router::new()
    .route("/", get(handler))
    .layer(
        tower::ServiceBuilder::new()
            // Set X-Request-Id header on incoming requests that don't have one
            .layer(SetRequestIdLayer::x_request_id(MakeRequestUuid))
            // Propagate X-Request-Id to outgoing responses
            .layer(PropagateRequestIdLayer::x_request_id())
            // TraceLayer should come after request ID is set
            .layer(TraceLayer::new_for_http().make_span_with(CustomMakeSpan)),
    );
```
Then in your custom span maker, read the request ID and include it in the span. Every log event within the request's span will include the request ID. When a user reports a problem, they give you their request ID and you can find all log events for that request immediately.
Practical Log Levels
| Level | Use for |
|---|---|
| ERROR | Things that are broken and need immediate attention |
| WARN | Things that are wrong but the system handled them |
| INFO | Normal operational events (request received, user created, job completed) |
| DEBUG | Diagnostic information useful during development |
| TRACE | Very detailed debugging (query parameters, response bodies, every step) |
In production, run at INFO. In development, DEBUG is usually enough. TRACE will give you more information than you wanted.
Do not use ERROR for "user not found" or "invalid input" — those are expected conditions that the application handles. ERROR means something is broken: a dependency is unreachable, a supposed invariant was violated, a budget was exceeded. If your error rate dashboard is full of 404s, you can't see the real errors.
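One way to enforce this discipline is to centralize the error-to-level decision instead of picking a level at each call site. A self-contained sketch with hypothetical error variants (this is not the chapter's AppError):

```rust
// Hypothetical error type, for illustration only.
#[derive(Debug)]
enum AppError {
    NotFound,                      // expected: the client asked for something absent
    Validation(String),            // expected: bad input, handled and rejected
    Database(String),              // broken: infrastructure failure
    DependencyUnavailable(String), // broken: upstream service unreachable
}

// Centralized mapping: expected conditions stay out of the ERROR stream,
// so the error dashboard only shows things that need a human.
fn log_level_for(err: &AppError) -> &'static str {
    match err {
        AppError::NotFound => "INFO",
        AppError::Validation(_) => "WARN",
        AppError::Database(_) | AppError::DependencyUnavailable(_) => "ERROR",
    }
}
```

In a real service this function would return a tracing::Level and be called from the place where errors are converted into responses, so the policy lives in exactly one spot.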
What You Actually Need in Production
At minimum:
1. JSON structured logging with level, timestamp, service name
2. Request ID on every request, propagated in spans
3. Latency, status code, and method/path logged per request
4. ERROR-level events for unexpected failures

Nice to have:

5. OpenTelemetry export to a trace backend
6. Correlation IDs propagated to downstream service calls
7. Structured error context (error type, message, affected resource)
You can start with just 1-4 and add the rest as your system grows. The important thing is that when production is broken at midnight, you can look at your logs and understand what happened. Unstructured logs make that harder than it needs to be.
Deploying Rust: Docker, Binaries, and Why Your Image Can Be Tiny
This is where Rust pays for itself in ways that are immediately visible. A Rust binary is statically compiled, has no runtime dependency, and doesn't need an interpreter, a VM, or a garbage collector running alongside it. A Docker image for a Rust service can be 10-50MB. Your Python service's image is probably 800MB and you've stopped thinking about it.
More importantly: the binary is the deployment artifact. There's no gem bundle to install, no pip packages to resolve, no node_modules to copy. You ship a file. It runs.
Building a Release Binary
# Debug build (fast to compile, slow to run, includes debug info)
cargo build
# Release build (slow to compile, fast to run, optimized)
cargo build --release
The release binary ends up at target/release/your-binary-name. It's self-contained. Copy it to any Linux machine with a compatible libc and run it.
# Check the binary size
ls -lh target/release/myapp
# Check what it links against
ldd target/release/myapp
The ldd output shows dynamic library dependencies. A typical Rust binary links against libgcc_s.so, libc.so.6, and libm.so.6 — the core system libraries. You can statically link even these with musl.
Static Linking with musl
For the smallest and most portable binaries, compile against musl libc:
# Add the musl target
rustup target add x86_64-unknown-linux-musl
# Build
cargo build --release --target x86_64-unknown-linux-musl
On macOS, you need a musl cross-compiler. The easiest path is Docker:
docker run --rm \
-v $(pwd):/app \
-w /app \
rust:alpine \
cargo build --release --target x86_64-unknown-linux-musl
The resulting binary has zero dynamic library dependencies. You can run it in a scratch container (a Docker image with literally nothing in it).
Multi-Stage Docker Build
This is the standard pattern. The first stage compiles. The second stage runs.
# syntax=docker/dockerfile:1
# ─── Build Stage ──────────────────────────────────────────────────────────────
FROM rust:1.75-slim-bookworm AS builder
WORKDIR /app
# Cache dependencies separately from source
# Copy manifests first
COPY Cargo.toml Cargo.lock ./
# Create a dummy source file so cargo can build the dependency tree
RUN mkdir src && echo "fn main() {}" > src/main.rs
# Build dependencies (this layer is cached as long as Cargo.toml/lock don't change)
RUN cargo build --release --locked && rm -f target/release/deps/myapp*
# Now copy the real source and build the application
COPY src ./src
COPY migrations ./migrations
RUN cargo build --release --locked
# ─── Runtime Stage ────────────────────────────────────────────────────────────
FROM debian:bookworm-slim AS runtime
# Install runtime dependencies (only what's needed)
RUN apt-get update && apt-get install -y \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Create a non-root user
RUN useradd -r -s /bin/false -m -d /app appuser
WORKDIR /app
# Copy the binary from the build stage
COPY --from=builder /app/target/release/myapp .
# Copy migrations if you run them at startup
COPY --from=builder /app/migrations ./migrations
USER appuser
EXPOSE 3000
ENV APP_ENV=production
ENV RUST_LOG=info
ENTRYPOINT ["./myapp"]
Using distroless for Even Smaller Images
Google's distroless images contain only the application and its runtime dependencies — no shell, no package manager, no utilities. Smaller attack surface, smaller image:
FROM gcr.io/distroless/cc-debian12 AS runtime
COPY --from=builder /app/target/release/myapp /app/myapp
USER nonroot:nonroot
EXPOSE 3000
ENTRYPOINT ["/app/myapp"]
The cc variant includes the C runtime library (glibc). If you compiled with musl:
FROM scratch AS runtime
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/myapp /app/myapp
USER 65534:65534
EXPOSE 3000
ENTRYPOINT ["/app/myapp"]
scratch is an empty image. The binary is the entire container. Image size will be roughly the binary size plus any static data you copy in.
Size Comparison
For a typical Axum service with Tokio, SQLx, and the tracing stack:
| Base image | Approximate size |
|---|---|
| rust:1.75 (full SDK) | ~1.7 GB |
| debian:bookworm-slim + binary | ~80 MB |
| distroless/cc + binary | ~30 MB |
| scratch + musl binary | ~8-15 MB |
The difference between debian:bookworm-slim and scratch is not just academic — it matters for pull times, storage costs, and the surface area available to a container escape.
Optimizing Build Times
Rust compile times are long. Docker layer caching helps, but you can do more:
cargo-chef for Better Layer Caching
The cargo-chef tool generates a recipe file from your dependency tree that can be cached independently of your source:
FROM lukemathwalker/cargo-chef:latest-rust-1.75-slim AS chef
WORKDIR /app
# ─── Planner Stage ────────────────────────────────────────────────────────────
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
# ─── Builder Stage ────────────────────────────────────────────────────────────
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies (cached when recipe.json is unchanged)
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --locked --bin myapp
# ─── Runtime Stage ────────────────────────────────────────────────────────────
FROM debian:bookworm-slim AS runtime
# ... (same as before)
With cargo-chef, the dependency compilation layer is cached even when your source files change. This turns a 5-minute build into a 30-second build after the first run, which is the difference between "deploy in CI" and "deploy in CI after you've made coffee and checked your email."
Releasing Without Docker
Sometimes the right deployment is just a binary. Linux servers, AWS Lambda (via the lambda_http crate), or bare metal. Build the binary, copy it, run it.
For cross-compilation to different targets:
# From macOS to Linux x86_64
rustup target add x86_64-unknown-linux-gnu
cargo build --release --target x86_64-unknown-linux-gnu
# You'll need a cross-linker; use the `cross` tool for simplicity
cargo install cross
cross build --release --target x86_64-unknown-linux-gnu
cross uses Docker containers with the appropriate cross-compilation toolchain. It's the easiest way to cross-compile without configuring linkers manually.
Health Checks
Your container orchestrator (Kubernetes, ECS, Fly.io) needs to know if your service is healthy. Implement a health endpoint:
```rust
use axum::{extract::State, http::StatusCode, response::Json, routing::get, Router};
use serde::Serialize;

#[derive(Serialize)]
struct HealthResponse {
    status: &'static str,
    version: &'static str,
}

async fn health(State(pool): State<sqlx::PgPool>) -> Result<Json<HealthResponse>, StatusCode> {
    // Verify database connectivity. A plain (non-macro) query avoids both
    // compile-time DATABASE_URL requirements and the reserved word `check`.
    sqlx::query("SELECT 1")
        .fetch_one(&pool)
        .await
        .map_err(|_| StatusCode::SERVICE_UNAVAILABLE)?;

    Ok(Json(HealthResponse {
        status: "ok",
        version: env!("CARGO_PKG_VERSION"),
    }))
}
```
Use env!("CARGO_PKG_VERSION") to embed the version from Cargo.toml at compile time. It costs nothing and makes debugging much easier when you're trying to figure out which version is deployed.
In your Dockerfile:
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD ["./myapp", "--health-check"]
# or, if curl is available in the image:
# HEALTHCHECK ... CMD curl -f http://localhost:3000/health || exit 1
Environment Variables in Production
Pass configuration via environment variables, not config files baked into the image:
docker run -d \
-e APP_DATABASE_URL="postgres://user:pass@host/db" \
-e APP_AUTH_JWT_SECRET="production-secret" \
-e APP_ENV="production" \
-e RUST_LOG="info" \
-p 3000:3000 \
myapp:latest
Or in a docker-compose.yml for local development (use .env for the actual values, not hardcoded in compose):
services:
  app:
    image: myapp:latest
    ports:
      - "3000:3000"
    environment:
      - APP_DATABASE_URL
      - APP_AUTH_JWT_SECRET
      - APP_ENV=development
      - RUST_LOG=debug
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
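The bare `APP_DATABASE_URL` and `APP_AUTH_JWT_SECRET` entries pull their values from your shell environment or from a `.env` file sitting next to the compose file. A sketch of that `.env` — the values are placeholders for local development, and the file should be gitignored:

```
# .env — local development values only; never commit this file
APP_DATABASE_URL=postgres://postgres:password@localhost:5432/myapp_dev
APP_AUTH_JWT_SECRET=local-dev-secret
```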
The Complete CI/CD Picture
Build, test, push, deploy:
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: password
          POSTGRES_DB: myapp_test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - name: Install Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Cache Rust dependencies
        uses: Swatinem/rust-cache@v2
      - name: Install sqlx-cli
        run: cargo install sqlx-cli --no-default-features --features rustls,postgres
      - name: Run migrations
        run: sqlx migrate run
        env:
          DATABASE_URL: postgres://postgres:password@localhost/myapp_test
      - name: Test
        run: cargo test
        env:
          # DATABASE_URL is needed at compile time for sqlx's query! macros
          DATABASE_URL: postgres://postgres:password@localhost/myapp_test
          TEST_DATABASE_URL: postgres://postgres:password@localhost/myapp_test

  build-and-push:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push to registry
        run: |
          echo ${{ secrets.REGISTRY_PASSWORD }} | docker login -u ${{ secrets.REGISTRY_USER }} --password-stdin
          docker push myapp:${{ github.sha }}
The important bit: run tests before building the release image. Don't push broken images.
Runtime Performance
Rust in production is fast. A few things that matter:
Use --release. Debug builds are 10-100x slower. This is not an exaggeration. Debug builds include bounds checks, no optimizations, and debug assertions. Release builds get inlining, vectorization, and dead code elimination.
Tune the connection pool. Too small: requests queue waiting for connections. Too large: you exhaust the database server's connection limit. Start with 10-20 and tune based on observed pool utilization.
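As a starting point, sqlx's `PgPoolOptions` exposes the knobs that matter. The numbers below are illustrative defaults to tune against observed pool utilization, not recommendations:

```rust
use sqlx::postgres::PgPoolOptions;
use std::time::Duration;

let pool = PgPoolOptions::new()
    // Upper bound on simultaneous connections held against the database.
    .max_connections(20)
    // Keep a few warm connections so cold requests don't pay connect latency.
    .min_connections(5)
    // Fail fast instead of queueing forever when the pool is exhausted.
    .acquire_timeout(Duration::from_secs(5))
    // Recycle idle connections so the pool shrinks under low load.
    .idle_timeout(Duration::from_secs(600))
    .connect(&database_url)
    .await?;
```

Remember that `max_connections` is per instance: five replicas at 20 connections each is 100 connections against the database server's limit.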
Don't run migrations on every startup in high-availability deployments. Running sqlx::migrate! on startup is fine for single-instance services. For rolling deploys with multiple instances starting simultaneously, migrations can conflict. Use a migration lock or run migrations as a pre-deployment step.
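One way to move migrations out of startup is a dedicated CI job that runs them once, before the rollout begins. A sketch as an extra job in the workflow above — the `deploy.sh` script and the `PRODUCTION_DATABASE_URL` secret are placeholders for whatever your deploy mechanism actually is:

```yaml
  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run migrations before rollout
        run: |
          cargo install sqlx-cli --no-default-features --features rustls,postgres
          sqlx migrate run
        env:
          DATABASE_URL: ${{ secrets.PRODUCTION_DATABASE_URL }}
      - name: Roll out new version
        run: ./scripts/deploy.sh myapp:${{ github.sha }}  # hypothetical deploy script
```

Because migrations run exactly once, the instances that start afterward never race each other to apply them.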
Monitor memory usage. Rust won't leak memory in the same way managed-runtime languages do, but it can still accumulate allocations — unbounded caches, retained connection pools, growing queues. Watch RSS growth over time.
Where to Go From Here
You now have a working mental model of the Rust backend stack: async execution under Tokio, HTTP with Axum and Tower, data persistence with SQLx, authentication patterns, a principled approach to errors, structured configuration, observability, and deployment. That's a real backend service.
What you've also accumulated, without necessarily noticing, is an intuition for why the design is the way it is. Axum's extractors make sense because FromRequest is a trait that anything can implement. Error handling is clear because thiserror and anyhow serve different purposes. Async works the way it does because futures are state machines, not threads.
This section is about where to go deeper.
Things Not Covered Here
WebSockets: Axum has built-in WebSocket support via axum::extract::ws. The upgrade dance is handled for you; you write the message handling logic. Works well with Tokio's broadcast channels for pub/sub patterns.
gRPC: The tonic crate is the standard for gRPC in Rust. It generates Rust code from .proto files and integrates with the Tokio ecosystem. If you're building internal service-to-service APIs or you need bidirectional streaming, it's worth knowing.
Message queues: lapin for AMQP (RabbitMQ), rdkafka for Kafka, aws-sdk-sqs for SQS. The async patterns are the same as everything else. Background workers are typically tokio::spawned tasks consuming from a channel.
Background jobs: apalis is a job queue library built for Rust that supports PostgreSQL, Redis, and SQLite as storage backends. For simpler needs, a tokio::spawned loop with a Tokio channel works fine.
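The worker-loop-plus-channel shape is simple enough to sketch. Shown here with std threads and `std::sync::mpsc` to keep it dependency-free; in an actual Tokio service you would swap in `tokio::spawn` and `tokio::sync::mpsc` with the same structure. The `Job` type and `spawn_worker` name are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// A job is just a boxed closure here; in a real service it would more
// likely be an enum of typed work items (send email, rebuild cache, ...).
type Job = Box<dyn FnOnce() + Send + 'static>;

/// Spawn a background worker that drains jobs from a channel.
fn spawn_worker() -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    thread::spawn(move || {
        // The loop ends cleanly when every Sender is dropped and the
        // channel closes — no explicit shutdown signal needed.
        for job in rx {
            job();
        }
    });
    tx
}
```

Handlers keep a clone of the `Sender` (in Axum, via shared state) and enqueue work without blocking the request path.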
Rate limiting: tower-governor or axum-governor provide rate limiting middleware based on the governor crate. Integrate at the router level before your handlers.
Caching: moka is a high-performance in-memory cache for Rust. For distributed caching, redis-rs (or deadpool-redis for async connection pooling) is the standard choice.
GraphQL: async-graphql integrates with Axum and Tokio. It's well-maintained and feature-complete. If you need GraphQL, it's the right choice.
The Crates Worth Knowing
Beyond what's in this book:
- serde_with — extra serialization adapters for serde (serialize UUIDs as strings, durations in various formats, etc.)
- validator — derive-based input validation
- garde — a newer validation crate with a cleaner API
- once_cell / std::sync::OnceLock — lazy initialization for static data
- dashmap — concurrent HashMap without needing a Mutex
- bytes — the Bytes type used throughout the async ecosystem for zero-copy byte manipulation
- reqwest — async HTTP client (the standard choice for making outbound HTTP calls)
- tokio-retry — retry logic for fallible async operations
- backoff — exponential backoff strategies
Resources That Are Actually Good
The Tokio tutorial (tokio.rs/tokio/tutorial) — the best explanation of how Tokio works from the people who built it. Read the "Select" and "Streams" chapters if you haven't.
Jon Gjengset's videos — "Crust of Rust" on YouTube. If you want to understand the machinery behind async, his video on async/await fundamentals builds the executor from scratch.
The axum examples directory on GitHub — the official examples cover JWT auth, multipart forms, WebSockets, static file serving, and more. When you want to do something specific with Axum, start there.
"Zero To Production In Rust" by Luca Palmieri — a full-length book about building a production email newsletter service in Rust. Goes deeper on testing and deployment than this book does. Worth reading after you've got the basics.
The SQLx documentation and its GitHub issues — the compile-time query verification generates occasionally confusing errors. The GitHub issues are searchable and usually contain the answer.
A Note on Ecosystem Maturity
The Rust backend ecosystem in 2025 is genuinely production-ready. Axum is used by companies doing meaningful scale. SQLx is in production at places that care deeply about database correctness. Tokio is the foundation under all of it.
The ecosystem is also still evolving. Axum releases occasionally have breaking changes. SQLx 0.7 → 0.8 will have them too. The pattern throughout this book — pin your crate versions, understand what you're using, read the changelogs — is good practice precisely because the ecosystem is alive rather than fossilized.
The things that won't change: the async execution model, the error handling philosophy, the way Tower middleware composes, the guarantees that SQLx's compile-time checking provides. Learn the principles, and the API churn won't slow you down.
What Building in Rust Feels Like After a While
The compiler is not your adversary. It is your most pedantic colleague who has read every RFC and will not let you ship something subtly wrong. After you've been writing Rust for a while, the feeling of "it compiles" carries genuine informational weight — not certainty that the logic is correct, but confidence that a large class of errors has been ruled out before you ran a single test.
The trade-off is real: Rust is harder to write than Python or JavaScript. You spend more time on types and lifetimes and error conversions. In return, you get faster services, smaller deployments, and production issues that are structurally different from what you'd debug in a GC'd language. Whether the trade-off is worth it depends on what you're building.
For backend services with real reliability requirements, long service lifetimes, and teams who will maintain the code for years: it usually is.
Go build something.