Postgres UNLOGGED tables: 10x write speed for data you can afford to lose
April 4, 2026 · Database · 5 min read


I was storing real-time analytics events in Postgres, and write throughput was bottlenecked by WAL (Write-Ahead Log) overhead. A colleague mentioned UNLOGGED tables — tables that skip the WAL entirely. The write speed improvement was dramatic, and my data was a perfect fit for the pattern because I could always rebuild it from the source events. Here is when and how to use them.

What UNLOGGED means

Normal Postgres tables write every change to the WAL before applying it. This guarantees durability — if the server crashes mid-write, Postgres can replay the WAL to recover. UNLOGGED tables skip the WAL entirely.

The trade-off: if Postgres crashes, unlogged table data is lost (the table is truncated during crash recovery). Unlogged tables are also empty on streaming replicas, since replication ships WAL. Not suitable for: user data, orders, financial records, or anything that cannot be rebuilt.

Suitable for: sessions, caches, temporary aggregations, queues where you process and delete, rate limiting counters, analytics staging tables.
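
If you inherit a database and want to know which tables are already unlogged, you can check `pg_class.relpersistence`, where 'u' marks unlogged relations. A minimal sketch, using an asyncpg pool as in the session examples below:

```python
# Audit which ordinary tables in the current database are UNLOGGED.
# relpersistence: 'p' = permanent (logged), 'u' = unlogged, 't' = temporary.
UNLOGGED_TABLES_SQL = """
    SELECT relname
    FROM pg_class
    WHERE relkind = 'r' AND relpersistence = 'u'
"""

async def list_unlogged_tables(pool) -> list[str]:
    # pool is an asyncpg.Pool, matching the rest of this post
    async with pool.acquire() as conn:
        rows = await conn.fetch(UNLOGGED_TABLES_SQL)
    return [r["relname"] for r in rows]
```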

Creating unlogged tables

sql
-- Session storage
CREATE UNLOGGED TABLE sessions (
    id TEXT PRIMARY KEY,
    user_id UUID NOT NULL,
    data JSONB NOT NULL DEFAULT '{}',
    created_at TIMESTAMPTZ DEFAULT NOW(),
    expires_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_sessions_user ON sessions (user_id);
CREATE INDEX idx_sessions_expires ON sessions (expires_at);

-- Rate limiting counters (can be rebuilt from logs)
CREATE UNLOGGED TABLE rate_limit_counters (
    key TEXT NOT NULL,
    window_start TIMESTAMPTZ NOT NULL,
    count INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (key, window_start)
);

-- Analytics staging (processed and deleted in batches)
CREATE UNLOGGED TABLE events_staging (
    id BIGSERIAL PRIMARY KEY,
    user_id UUID,
    event_type TEXT NOT NULL,
    payload JSONB,
    received_at TIMESTAMPTZ DEFAULT NOW()
);
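
The staging table is meant to be processed and deleted in batches. One way to drain it safely, even with several workers running, is a `DELETE ... RETURNING` that claims rows with `FOR UPDATE SKIP LOCKED`. A sketch (the function name and batch size are illustrative, not part of the schema above):

```python
# Claim a batch of staged events: delete them and return their contents
# in one statement, skipping rows another worker has already locked.
DRAIN_BATCH_SQL = """
    DELETE FROM events_staging
    WHERE id IN (
        SELECT id FROM events_staging
        ORDER BY id
        LIMIT $1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, user_id, event_type, payload, received_at
"""

async def drain_staging_batch(pool, batch_size: int = 1000):
    # pool is an asyncpg.Pool; returns the claimed rows for processing
    async with pool.acquire() as conn:
        return await conn.fetch(DRAIN_BATCH_SQL, batch_size)
```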

Performance benchmark

sql
-- Benchmark: insert 100K rows
-- Regular table
CREATE TABLE bench_regular (id BIGSERIAL, data TEXT);
\timing
INSERT INTO bench_regular (data) 
SELECT md5(random()::text) FROM generate_series(1, 100000);
-- Time: ~850ms

-- Unlogged table  
CREATE UNLOGGED TABLE bench_unlogged (id BIGSERIAL, data TEXT);
\timing
INSERT INTO bench_unlogged (data)
SELECT md5(random()::text) FROM generate_series(1, 100000);
-- Time: ~85ms (10x faster!)
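
If you want to repeat this comparison from application code instead of psql, a small timing helper is enough. This sketch assumes an asyncpg connection and a table with a `data TEXT` column, like the bench tables above; your numbers will vary with hardware and `synchronous_commit` settings:

```python
import time

async def time_insert(conn, table: str, rows: int) -> float:
    """Time a bulk insert of `rows` md5 strings into `table`.

    `conn` is an asyncpg connection; returns elapsed wall-clock seconds.
    """
    start = time.perf_counter()
    await conn.execute(
        f"INSERT INTO {table} (data) "
        f"SELECT md5(random()::text) FROM generate_series(1, {rows})"
    )
    return time.perf_counter() - start
```

Run it once against `bench_regular` and once against `bench_unlogged` on the same connection to reproduce the comparison.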

Converting an existing table

sql
-- Convert regular to unlogged.
-- Note: both directions rewrite the entire table and take an
-- ACCESS EXCLUSIVE lock; SET LOGGED additionally writes the whole
-- table to the WAL, so expect downtime on large tables.
ALTER TABLE sessions SET UNLOGGED;

-- Convert back to regular (logged) if requirements change
ALTER TABLE sessions SET LOGGED;

Session management example

python
import asyncpg
import uuid
import json
from datetime import datetime, timedelta, timezone

async def create_session(pool: asyncpg.Pool, user_id: str, data: dict) -> str:
    session_id = str(uuid.uuid4())
    # utcnow() is deprecated and returns a naive datetime;
    # use an aware UTC timestamp for the TIMESTAMPTZ column
    expires = datetime.now(timezone.utc) + timedelta(hours=24)
    
    async with pool.acquire() as conn:
        await conn.execute("""
            INSERT INTO sessions (id, user_id, data, expires_at)
            VALUES ($1, $2, $3, $4)
        """, session_id, user_id, json.dumps(data), expires)
    
    return session_id

async def get_session(pool: asyncpg.Pool, session_id: str) -> dict | None:
    async with pool.acquire() as conn:
        row = await conn.fetchrow("""
            SELECT user_id, data FROM sessions
            WHERE id = $1 AND expires_at > NOW()
        """, session_id)
    
    if not row:
        return None
    return {"user_id": str(row["user_id"]), **json.loads(row["data"])}

async def cleanup_expired_sessions(pool: asyncpg.Pool) -> int:
    """Run this as a periodic job."""
    async with pool.acquire() as conn:
        result = await conn.execute(
            "DELETE FROM sessions WHERE expires_at < NOW()"
        )
    return int(result.split()[-1])

Rate limiting with unlogged tables

sql
-- Efficient rate limiter using unlogged table
-- Returns true if request is allowed, false if rate limit exceeded
CREATE OR REPLACE FUNCTION check_rate_limit(
    p_key TEXT,
    p_max_requests INTEGER,
    p_window_seconds INTEGER
) RETURNS BOOLEAN AS $$
DECLARE
    v_window TIMESTAMPTZ;
    v_count INTEGER;
BEGIN
    v_window := date_trunc('second', NOW()) - 
                ((EXTRACT(EPOCH FROM NOW())::integer % p_window_seconds) * INTERVAL '1 second');
    
    INSERT INTO rate_limit_counters (key, window_start, count)
    VALUES (p_key, v_window, 1)
    ON CONFLICT (key, window_start) DO UPDATE
        SET count = rate_limit_counters.count + 1
    RETURNING count INTO v_count;
    
    RETURN v_count <= p_max_requests;
END;
$$ LANGUAGE plpgsql;

-- Usage: check_rate_limit('user_123_api', 100, 60) 
-- Returns true if under 100 requests per 60 seconds
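
From application code you would call the SQL function through your driver. The sketch below is illustrative: `is_allowed` simply invokes `check_rate_limit` via an asyncpg pool, and `window_start` mirrors the function's window arithmetic in Python so you can reason about bucket boundaries client-side:

```python
from datetime import datetime, timezone

def window_start(now: datetime, window_seconds: int) -> datetime:
    """Client-side mirror of the window computation in check_rate_limit:
    floor `now` to the start of its fixed-size window."""
    epoch = int(now.timestamp())
    return datetime.fromtimestamp(epoch - epoch % window_seconds,
                                  tz=timezone.utc)

async def is_allowed(pool, key: str, max_requests: int,
                     window_seconds: int) -> bool:
    # pool is an asyncpg.Pool; calls the SQL function defined above
    async with pool.acquire() as conn:
        return await conn.fetchval(
            "SELECT check_rate_limit($1, $2, $3)",
            key, max_requests, window_seconds,
        )
```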

The performance gain from UNLOGGED tables is most dramatic for write-heavy workloads. If you are reading more than writing, the benefit is smaller (reads are not WAL-logged either way). The right candidates are tables that are written frequently and where the cost of rebuilding the data on a crash is acceptable — typically a few minutes of lost session data or a few seconds of rate limit counter drift.
