Postgres advisory locks: distributed locking without Redis
April 4, 2026 · Database · 6 min read

I was adding distributed locking to prevent duplicate cron job execution and immediately reached for Redis. Then I discovered Postgres advisory locks — they provide distributed mutual exclusion using the Postgres connection I already had. No additional infrastructure, no Redis dependency, and they automatically release on connection close. Here is the pattern.

What advisory locks are

Advisory locks are user-defined locks that Postgres manages. Unlike row locks, they do not automatically lock any data — you define what each lock ID means. They are useful for:

  • Preventing duplicate cron job execution across multiple app instances
  • Mutually exclusive resource acquisition (e.g., only one worker processes a batch)
  • Sequential processing of operations that must not interleave

Session-level vs transaction-level

sql
-- Session-level: persists until explicitly released or connection closes
-- Best for: long-running jobs, cron exclusion
SELECT pg_advisory_lock(12345);
-- ... do work ...
SELECT pg_advisory_unlock(12345);

-- Transaction-level: automatically released when transaction ends
-- Best for: transactional operations where you need the lock only during the transaction
SELECT pg_advisory_xact_lock(12345);
-- Lock released automatically when transaction commits or rolls back

Non-blocking try lock (most useful pattern)

python
import asyncpg
import hashlib
from contextlib import asynccontextmanager

def lock_id(name: str) -> int:
    """Convert a string lock name to a consistent int for pg_advisory_lock."""
    # The single-key advisory functions take a bigint; truncating to the
    # positive 32-bit range keeps the ids well within bounds.
    return int(hashlib.md5(name.encode()).hexdigest()[:15], 16) % (2**31)

@asynccontextmanager
async def advisory_lock(
    conn: asyncpg.Connection,
    lock_name: str,
    blocking: bool = False,
):
    """
    Acquire an advisory lock by name.
    
    If blocking=False (default), returns immediately if lock is unavailable.
    Raises LockUnavailable if lock cannot be acquired.
    """
    lid = lock_id(lock_name)
    
    if blocking:
        await conn.execute("SELECT pg_advisory_lock($1)", lid)
        acquired = True
    else:
        acquired = await conn.fetchval(
            "SELECT pg_try_advisory_lock($1)", lid
        )
    
    if not acquired:
        raise LockUnavailable(f"Could not acquire lock: {lock_name}")
    
    try:
        yield
    finally:
        await conn.execute("SELECT pg_advisory_unlock($1)", lid)


class LockUnavailable(Exception):
    pass
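
Postgres also exposes two-argument forms of the advisory functions (e.g. pg_advisory_lock(int, int) and pg_try_advisory_xact_lock(int, int)), where each key is a signed 32-bit integer — convenient for namespacing locks by feature and resource. A sketch of a matching helper (lock_id_pair is my own name, not part of asyncpg):

```python
import hashlib
import struct

def lock_id_pair(namespace: str, name: str) -> tuple[int, int]:
    """Derive a (classid, objid) pair for the two-argument advisory
    lock functions. Each key must fit in a signed 32-bit int."""
    def h(s: str) -> int:
        # First 4 bytes of SHA-256, read as a signed big-endian int32
        return struct.unpack(">i", hashlib.sha256(s.encode()).digest()[:4])[0]
    return h(namespace), h(name)

# Usage: SELECT pg_try_advisory_lock($1, $2) with these two values
classid, objid = lock_id_pair("cron", "daily_report_job")
```

A hash collision between two lock names just makes them share a mutex — correctness is preserved, but unrelated work gets serialized, so a wider key space (or the two-key form) reduces that risk versus the 31-bit single-key scheme above.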

Preventing duplicate cron jobs

python
import asyncio
import asyncpg

async def run_daily_report_job(pool: asyncpg.Pool) -> None:
    """
    Daily report generation — should only run once across all app instances.
    Uses advisory lock to prevent duplicate execution.
    """
    async with pool.acquire() as conn:
        try:
            async with advisory_lock(conn, "daily_report_job", blocking=False):
                print("Acquired lock — running daily report job")
                await generate_daily_report()
                print("Daily report job complete")
        except LockUnavailable:
            print("Another instance is running the daily report job — skipping")

# On each app instance, schedule this to run at the same time.
# Only one will acquire the lock; others will skip gracefully.
async def main():
    pool = await asyncpg.create_pool(dsn=DATABASE_URL)
    while True:
        await run_daily_report_job(pool)
        await asyncio.sleep(3600)  # Check hourly

asyncio.run(main())

Transaction-level lock for sequential processing

python
async def process_payment_order(
    conn: asyncpg.Connection,
    order_id: str,
) -> None:
    """
    Process a payment in a way that prevents concurrent processing
    of the same order. Uses transaction-level advisory lock.
    """
    lid = lock_id(f"payment_order_{order_id}")
    
    async with conn.transaction():
        # Try to acquire lock for this specific order
        acquired = await conn.fetchval(
            "SELECT pg_try_advisory_xact_lock($1)", lid
        )
        
        if not acquired:
            # Another process is handling this order — skip
            return
        
        # Re-check status now that the advisory lock is held; FOR UPDATE
        # also row-locks the order as a second line of defense
        order = await conn.fetchrow(
            "SELECT * FROM orders WHERE id = $1 FOR UPDATE",
            order_id
        )
        
        if order["status"] != "pending":
            return  # Already processed
        
        # Process the payment
        await charge_card(order)
        await conn.execute(
            "UPDATE orders SET status = 'completed' WHERE id = $1",
            order_id
        )
        # Lock releases automatically when transaction commits

Viewing active advisory locks

sql
-- See who holds which advisory locks
SELECT 
    pid,
    usename,
    classid,
    objid,
    mode,
    granted
FROM pg_locks
JOIN pg_stat_activity USING (pid)
WHERE locktype = 'advisory';

Advisory locks vs Redis locks

Redis locks (like Redlock) have one advantage: they are independent of your database connection. If your Postgres connection dies during a job, the advisory lock is automatically released — which is usually what you want, but in rare cases you might want the lock to outlive the connection (not possible with advisory locks).

For the common case of preventing duplicate cron execution and simple mutual exclusion, advisory locks are simpler, faster (no network round trip to a separate service), and require no additional infrastructure. I now reach for Redis locks only when I specifically need the lock to outlive the database connection.
