Databases with SQLx: Async, Typed, and Honest

SQLx is the async database library for Rust. It's not an ORM — there's no query builder, no magical User::find_by_id() method, no DSL to learn. You write SQL. SQLx runs it and maps the results to Rust types. If your SQL is wrong, you find out at compile time, not in production. This is the correct trade-off.

The headline feature is compile-time query verification: the query! macro connects to your database during compilation, sends the query to the server for parsing and type analysis, and uses the returned metadata to verify that your Rust code handles the result types correctly. No more "column 'user_id' does not exist" errors at 3am.

Setup

[dependencies]
sqlx = { version = "0.7", features = [
    "runtime-tokio-rustls",
    "postgres",        # or "sqlite", "mysql"
    "macros",          # for query!, query_as!, etc.
    "migrate",         # for migrations
    "uuid",            # map UUID columns to uuid::Uuid
    "time",            # map timestamp columns to time::OffsetDateTime
    "json",            # map jsonb columns to serde_json::Value
] }
uuid = { version = "1", features = ["v4"] }
time = { version = "0.3", features = ["serde"] }

The runtime-tokio-rustls feature uses Tokio as the async runtime and rustls for TLS (no OpenSSL dependency needed). If you need native-tls, swap accordingly.

Connection Pools

You never want a single database connection in a production service. You want a pool:

use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;

pub async fn create_pool(database_url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(20)
        .min_connections(2)
        .acquire_timeout(std::time::Duration::from_secs(5))
        .idle_timeout(std::time::Duration::from_secs(600))
        .max_lifetime(std::time::Duration::from_secs(1800))
        .connect(database_url)
        .await
}

What these options mean:

  • max_connections(20): maximum pool size. Size this based on your database server's max_connections limit, divided across your service instances. Postgres default is 100; if you run 5 instances with pools of 20, you're using 100 connections. That math matters.
  • min_connections(2): keep at least 2 connections warm. Cold connections have latency.
  • acquire_timeout: how long to wait for a connection from the pool before failing. Without this, a slow query cascade can cause all incoming requests to pile up waiting for connections forever.
  • idle_timeout / max_lifetime: recycle connections that might be in a bad state.

PgPool is Clone, and cloning it is cheap — it clones a reference to the same pool. Pass it around by cloning.
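A sketch of the pattern, with an illustrative echo query (the worker bodies are hypothetical):

```rust
use sqlx::PgPool;

// Illustrative: fan out four workers that share one pool.
pub async fn spawn_workers(pool: PgPool) {
    for i in 0..4i64 {
        let pool = pool.clone(); // cheap: bumps a reference count, same pool underneath
        tokio::spawn(async move {
            // Each task checks connections out of the same shared pool.
            let (echo,): (i64,) = sqlx::query_as("SELECT $1::bigint")
                .bind(i)
                .fetch_one(&pool)
                .await
                .expect("query failed");
            println!("worker {i} echoed {echo}");
        });
    }
}
```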

The query! Macro

use sqlx::PgPool;

pub async fn get_user_by_id(pool: &PgPool, id: i64) -> Result<Option<User>, sqlx::Error> {
    // The query! macro:
    // 1. Connects to DATABASE_URL at compile time
    // 2. Sends this query to Postgres for analysis
    // 3. Infers the return type of each column
    // 4. Returns an anonymous struct with typed fields
    let row = sqlx::query!(
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(pool)
    .await?;

    Ok(row.map(|r| User {
        id: r.id,
        email: r.email,
        name: r.name,
        created_at: r.created_at,
    }))
}

The result of query! is an anonymous struct whose fields match your SELECT columns. The types come from Postgres: BIGINT → i64, TEXT → String, TIMESTAMPTZ → time::OffsetDateTime (with the time feature), UUID → uuid::Uuid.

Nullable columns become Option<T>. If Postgres says a column can be NULL, the field is optional; if it can't, it's not. The compiler enforces this, which means you can't accidentally treat a nullable column as definitely-present. (Where the inference is too conservative, such as expressions or the nullable side of a LEFT JOIN, SQLx supports overrides: SELECT my_col AS "my_col!" forces non-null, AS "my_col?" forces Option.)
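For instance, assuming a hypothetical nullable bio TEXT column on users, a sketch:

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Assumes users has a `bio TEXT` column without NOT NULL (hypothetical).
pub async fn get_bio(pool: &PgPool, id: Uuid) -> Result<Option<String>, sqlx::Error> {
    let row = sqlx::query!("SELECT bio FROM users WHERE id = $1", id)
        .fetch_one(pool)
        .await?;
    // row.bio is Option<String> because Postgres reports the column as
    // nullable; treating it as a plain String would be a compile error.
    Ok(row.bio)
}
```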

query_as! for Existing Structs

More often, you have a struct you want to map to:

use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use time::OffsetDateTime;
use uuid::Uuid;

#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: OffsetDateTime,
    pub updated_at: OffsetDateTime,
}

pub async fn get_user(pool: &PgPool, id: Uuid) -> Result<Option<User>, sqlx::Error> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at, updated_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(pool)
    .await
}

query_as! maps the result set directly to your struct. Field names and types must match the SELECT columns, and you get a compile error if they don't, once the macro verifies against the database schema. (The macro checks columns against fields itself; the FromRow derive is only needed by the non-macro sqlx::query_as function.)

Fetch Methods

  • .fetch_one(pool) → Result<Row, Error>: exactly one row expected; error if zero or multiple
  • .fetch_optional(pool) → Result<Option<Row>, Error>: zero or one row
  • .fetch_all(pool) → Result<Vec<Row>, Error>: all rows
  • .fetch(pool) → Stream<Item = Result<Row, Error>>: streaming large result sets
  • .execute(pool) → Result<PgQueryResult, Error>: INSERT/UPDATE/DELETE
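For write statements, .execute returns a PgQueryResult whose rows_affected tells you what actually happened; a sketch:

```rust
use sqlx::PgPool;
use uuid::Uuid;

pub async fn delete_user(pool: &PgPool, id: Uuid) -> Result<bool, sqlx::Error> {
    let result = sqlx::query!("DELETE FROM users WHERE id = $1", id)
        .execute(pool)
        .await?;
    // rows_affected() == 0 means there was no such user to delete.
    Ok(result.rows_affected() > 0)
}
```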

For large result sets, .fetch() returns a stream that yields rows one at a time, avoiding loading everything into memory:

use futures::TryStreamExt;
use sqlx::PgPool;

pub async fn process_all_users(pool: &PgPool) -> Result<(), sqlx::Error> {
    let mut rows = sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at, updated_at FROM users"
    )
    .fetch(pool);

    // process_user is your own async handler for each row.
    while let Some(user) = rows.try_next().await? {
        process_user(user).await;
    }
    Ok(())
}

Transactions

Transactions in SQLx are straightforward:

use sqlx::PgPool;
use uuid::Uuid;

pub async fn transfer_credits(
    pool: &PgPool,
    from_user: Uuid,
    to_user: Uuid,
    amount: i64,
) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    let balance = sqlx::query_scalar!(
        "SELECT balance FROM accounts WHERE user_id = $1 FOR UPDATE",
        from_user
    )
    .fetch_one(&mut *tx)
    .await?;

    if balance < amount {
        // tx is dropped here, which rolls back the transaction
        return Err(sqlx::Error::RowNotFound); // or your own error type
    }

    sqlx::query!(
        "UPDATE accounts SET balance = balance - $1 WHERE user_id = $2",
        amount,
        from_user
    )
    .execute(&mut *tx)
    .await?;

    sqlx::query!(
        "UPDATE accounts SET balance = balance + $1 WHERE user_id = $2",
        amount,
        to_user
    )
    .execute(&mut *tx)
    .await?;

    tx.commit().await?;
    Ok(())
}

When tx is dropped without commit(), SQLx rolls back. This is the correct default. Note &mut *tx — you dereference the transaction to get to the underlying connection.

Migrations

SQLx has a migration system built in. Migrations are numbered SQL files:

migrations/
  20240101000000_create_users.sql
  20240102000000_add_email_index.sql

Each file:

-- 20240101000000_create_users.sql
CREATE TABLE users (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email       TEXT NOT NULL UNIQUE,
    name        TEXT NOT NULL,
    password_hash TEXT NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX users_email_idx ON users (email);

Install the CLI:

cargo install sqlx-cli --no-default-features --features rustls,postgres

Run migrations:

export DATABASE_URL="postgres://localhost/myapp"
sqlx migrate run

Or run them programmatically at startup:

pub async fn run_migrations(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
    sqlx::migrate!("./migrations").run(pool).await
}

#[tokio::main]
async fn main() {
    let pool = create_pool(&std::env::var("DATABASE_URL").unwrap()).await.unwrap();
    run_migrations(&pool).await.expect("Migration failed");
    // start server...
}

sqlx::migrate! includes the SQL files in your binary at compile time. You get a self-contained executable that can migrate its own database on startup, no external tools needed.

Compile-Time Verification Setup

For the query! macros to work, you need DATABASE_URL set when you run cargo build or cargo check:

export DATABASE_URL="postgres://postgres:password@localhost/myapp_dev"
cargo build

For CI and environments without a database, SQLx has an offline mode. After setting up your database and running:

cargo sqlx prepare

SQLx writes query metadata to a .sqlx/ directory that you commit to the repository. CI can then build without a live database:

SQLX_OFFLINE=true cargo build

This is the standard pattern: offline data in CI, live database in development.
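One illustrative CI split (the job layout and database URL are assumptions):

```shell
# Job 1: fast build, no database, uses the committed .sqlx metadata.
SQLX_OFFLINE=true cargo build --locked

# Job 2: runs with a Postgres service container and verifies the committed
# .sqlx metadata is still in sync with the queries in the code.
export DATABASE_URL="postgres://postgres:password@localhost/myapp_ci"
cargo sqlx prepare --check
```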

Integrating with Axum

Here's a complete CRUD example for users:

use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::Json,
    routing::{get, post},
    Router,
};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use uuid::Uuid;
use time::OffsetDateTime;

#[derive(Debug, Serialize, sqlx::FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: OffsetDateTime,
}

#[derive(Deserialize)]
pub struct CreateUser {
    pub email: String,
    pub name: String,
}

pub fn user_router() -> Router<PgPool> {
    Router::new()
        .route("/users", get(list_users).post(create_user))
        .route("/users/:id", get(get_user))
}

async fn list_users(State(pool): State<PgPool>) -> Result<Json<Vec<User>>, StatusCode> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at FROM users ORDER BY created_at DESC"
    )
    .fetch_all(&pool)
    .await
    .map(Json)
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}

async fn get_user(
    State(pool): State<PgPool>,
    Path(id): Path<Uuid>,
) -> Result<Json<User>, StatusCode> {
    sqlx::query_as!(
        User,
        "SELECT id, email, name, created_at FROM users WHERE id = $1",
        id
    )
    .fetch_optional(&pool)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
    .map(Json)
    .ok_or(StatusCode::NOT_FOUND)
}

async fn create_user(
    State(pool): State<PgPool>,
    Json(payload): Json<CreateUser>,
) -> Result<(StatusCode, Json<User>), StatusCode> {
    let user = sqlx::query_as!(
        User,
        r#"
        INSERT INTO users (email, name)
        VALUES ($1, $2)
        RETURNING id, email, name, created_at
        "#,
        payload.email,
        payload.name
    )
    .fetch_one(&pool)
    .await
    .map_err(|e| {
        if let sqlx::Error::Database(db_err) = &e {
            // PostgreSQL unique constraint violation (duplicate email)
            if db_err.is_unique_violation() {
                return StatusCode::CONFLICT;
            }
        }
        StatusCode::INTERNAL_SERVER_ERROR
    })?;

    Ok((StatusCode::CREATED, Json(user)))
}

Note the RETURNING clause in the INSERT — this is standard PostgreSQL and lets you get the full created row without a second query. SQLx handles it cleanly with fetch_one.
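RETURNING works the same way for UPDATE; a sketch of a hypothetical rename helper, reusing the User struct from the handlers above:

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Hypothetical helper: rename a user and get the updated row back in one
// round trip. fetch_optional yields None if no row matched the WHERE clause.
pub async fn rename_user(
    pool: &PgPool,
    id: Uuid,
    name: &str,
) -> Result<Option<User>, sqlx::Error> {
    sqlx::query_as!(
        User,
        r#"
        UPDATE users SET name = $2
        WHERE id = $1
        RETURNING id, email, name, created_at
        "#,
        id,
        name
    )
    .fetch_optional(pool)
    .await
}
```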

What Goes Wrong

DATABASE_URL not set: The query! macros fail at compile time with a confusing error. Either set the URL, or set SQLX_OFFLINE=true (which requires the prepared .sqlx data to be committed).

Connection pool exhausted: All max_connections are in use and acquire_timeout triggers. Your request fails with a timeout error. Monitor pool utilization. Consider whether slow queries are holding connections for too long.
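The pool exposes two simple gauges worth watching; a sketch that just logs them (wire these into your metrics system instead):

```rust
use sqlx::PgPool;

pub fn log_pool_stats(pool: &PgPool) {
    // size() counts open connections (idle plus checked out);
    // num_idle() counts connections sitting unused in the pool.
    let open = pool.size() as usize;
    let idle = pool.num_idle();
    println!("pool: {open} open, {idle} idle, {} in use", open - idle);
}
```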

Schema drift: Your Rust types no longer match the actual database schema. Run migrations. The compile-time checking only works against the schema that existed when you last ran cargo sqlx prepare or compiled against a live database.

Missing sqlx::FromRow derive: the non-macro sqlx::query_as function requires the target struct to implement FromRow. The query_as! macro checks columns against fields itself and doesn't need the derive, so this only bites when you drop down to runtime-checked queries.
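When you do need the runtime-checked form, say for SQL assembled at runtime, FromRow is what makes it work; a sketch using the five-field User struct from earlier:

```rust
use sqlx::PgPool;

// FromRow is required here: the query string is only a runtime value, so
// the compile-time macros can't be used.
pub async fn list_users_ordered(
    pool: &PgPool,
    order_col: &str, // must be validated against an allow-list by the caller;
                     // identifiers can't be bind parameters
) -> Result<Vec<User>, sqlx::Error> {
    let sql = format!(
        "SELECT id, email, name, created_at, updated_at FROM users ORDER BY {order_col}"
    );
    sqlx::query_as::<_, User>(&sql).fetch_all(pool).await
}
```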

The next chapter covers authentication — including storing and verifying password hashes, which is the first place you'll need to reach for spawn_blocking.