PgBouncer saved my Postgres from connection exhaustion — here's how to set it up
April 4, 2026 · Database · 6 min read


We scaled to 8 app server replicas and started seeing "remaining connection slots are reserved for non-replication superuser connections." Postgres was out of connections. The fix was not increasing max_connections (which wastes memory) — it was adding PgBouncer. Here is the exact setup that took us from a connection limit of 200 to supporting 1,000+ concurrent app connections backed by roughly 30 actual Postgres connections (a pool of 25 plus a small reserve).

Why Postgres connections are expensive

Each Postgres connection spawns a backend process (~5-10MB of memory). With max_connections=100 and each connection using 10MB, that is 1GB just for connections — before any actual query work. More connections also increase lock contention and checkpoint overhead.
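To make the memory cost concrete, here is the back-of-envelope arithmetic as a tiny sketch (the ~10MB-per-backend figure is the estimate from above, not a measured constant):

```python
def backend_memory_mb(connections: int, mb_per_backend: int = 10) -> int:
    """Rough memory consumed by Postgres backend processes alone,
    before any query does real work."""
    return connections * mb_per_backend

# 100 connections at ~10MB each: about 1GB of RAM just to hold them open
print(backend_memory_mb(100))  # 1000
```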

PgBouncer sits between your app and Postgres. Your app talks to PgBouncer with many connections. PgBouncer maintains a small pool of actual Postgres connections and routes app requests through them.

Three pooling modes

Session pooling:     Connection held for entire session (minimal benefit)
Transaction pooling: Connection held only during a transaction (best for web apps)
Statement pooling:   Connection held only during a statement (rare, limited features)

Use transaction pooling for web applications. The app holds a Postgres connection only while it is actually doing database work, not for the entire request lifecycle.
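One client-side gotcha worth wiring in from the start: asyncpg caches prepared statements, which are pinned to a specific server connection — under transaction pooling each transaction may land on a different one, so the cache must be disabled. A sketch of pool settings for PgBouncer in transaction mode (the host, credentials, and pool size here are assumptions matching the compose file below):

```python
def pgbouncer_pool_kwargs(host: str = "pgbouncer", port: int = 5432,
                          database: str = "myapp", user: str = "postgres",
                          password: str = "secret",
                          max_size: int = 100) -> dict:
    """Connection-pool settings suitable for asyncpg behind PgBouncer
    in transaction pooling mode."""
    return {
        "host": host,
        "port": port,
        "database": database,
        "user": user,
        "password": password,
        "max_size": max_size,
        # Prepared statements live on one server connection; with transaction
        # pooling the next transaction may get a different one, so asyncpg's
        # statement cache must be turned off.
        "statement_cache_size": 0,
    }

# Usage (requires asyncpg and a running PgBouncer):
#   pool = await asyncpg.create_pool(**pgbouncer_pool_kwargs())
```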

Docker Compose setup

```yaml
version: '3.8'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    # Increase max_connections slightly (not the main fix)
    command: postgres -c max_connections=200

  pgbouncer:
    image: edoburu/pgbouncer:1.22.0
    environment:
      DB_HOST: postgres
      DB_PORT: 5432
      DB_USER: postgres           # Must match a role that exists in Postgres
      DB_PASSWORD: secret
      DB_NAME: myapp
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000       # Max app connections to PgBouncer
      DEFAULT_POOL_SIZE: 25       # Actual Postgres connections per user/db pair
      MIN_POOL_SIZE: 5
      RESERVE_POOL_SIZE: 5        # Emergency connections
      SERVER_IDLE_TIMEOUT: 600    # Return idle Postgres connections after 10 min
    ports:
      - "5432:5432"  # App connects to PgBouncer on port 5432
    depends_on:
      - postgres
```

Production pgbouncer.ini

```ini
[databases]
; Application database — each user gets pool_size connections
myapp = host=postgres port=5432 dbname=myapp

[pgbouncer]
listen_addr = *
listen_port = 5432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

; Pool configuration
pool_mode = transaction
max_client_conn = 1000      ; App can open 1000 connections to PgBouncer
default_pool_size = 25      ; PgBouncer opens max 25 connections to Postgres
min_pool_size = 5           ; Keep 5 connections warm
reserve_pool_size = 5       ; Reserve for emergencies

; Timeouts
server_idle_timeout = 600   ; Close idle Postgres connections after 10 min
client_idle_timeout = 0     ; Don't close idle app connections (0 = disabled)
query_timeout = 0           ; No query timeout (handle in app)

; Logging
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1

; Cleanup between clients (only applied in session pooling mode;
; transaction pooling skips it, since nothing session-scoped survives anyway)
server_reset_query = DISCARD ALL
```
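With auth_type = scram-sha-256, each line of userlist.txt pairs a quoted username with either a plaintext password or the SCRAM verifier copied from Postgres (`SELECT usename, passwd FROM pg_shadow`). A small helper sketch for generating those lines — the quote-doubling escape follows my reading of PgBouncer's auth-file format, so verify against your version's docs:

```python
def userlist_line(username: str, secret: str) -> str:
    """Format one userlist.txt line: "username" "password-or-scram-verifier".
    Embedded double quotes are escaped by doubling them."""
    def quoted(value: str) -> str:
        return '"' + value.replace('"', '""') + '"'
    return f"{quoted(username)} {quoted(secret)}"

# Example: paste the verifier from pg_shadow as the second field
# userlist_line("postgres", "SCRAM-SHA-256$4096:...")
```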

Limitations of transaction pooling

Transaction pooling means you cannot use session-scoped features across multiple transactions:

```python
# DOES NOT WORK with transaction pooling:
# SET and other session variables don't persist between transactions

# WORKS: always use explicit transaction blocks
async with conn.transaction():
    await conn.execute("SET LOCAL my_setting = 'value'")
    await conn.execute("SELECT * FROM my_table")
    # Setting is only valid within this transaction

# DOES NOT WORK:
await conn.execute("SELECT set_config('app.current_user', $1, false)", user_id)
# Next transaction may get a different Postgres connection — the setting is gone

# WORKS INSTEAD:
# Pass the value in each query, or use a transaction-scoped setting
# (SET LOCAL, or set_config(..., true)) inside an explicit transaction block
```

Monitoring PgBouncer

```sql
-- Connect to PgBouncer's admin console:
-- psql -h localhost -p 5432 -U pgbouncer pgbouncer

SHOW POOLS;
-- Shows: database, user, cl_active, cl_waiting, sv_active, sv_idle, sv_used

-- cl_waiting > 0 means the app is waiting for a Postgres connection
-- If this is consistently > 0, increase default_pool_size

SHOW STATS;
-- Shows: total_requests, total_wait_time, avg_wait_time

SHOW CLIENTS;
-- Lists all currently connected app clients
```
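If you poll SHOW POOLS programmatically (e.g. from a cron job or exporter), a small check like this can flag pools where clients are queueing. This is a sketch: it assumes each row has already been fetched into a dict keyed by the column names shown above:

```python
def pools_needing_attention(pool_rows, wait_threshold: int = 0):
    """Given SHOW POOLS rows as dicts, return (database, user) pairs
    where clients are stuck waiting for a server connection —
    a signal that default_pool_size may be too small."""
    return [
        (row["database"], row["user"])
        for row in pool_rows
        if row.get("cl_waiting", 0) > wait_threshold
    ]
```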

The numbers that drove the decision

Before PgBouncer:
  8 app servers × 25 pool size = 200 Postgres connections
  Hit max_connections at moderate load

After PgBouncer:
  8 app servers × 100 app pool = 800 app → PgBouncer connections
  PgBouncer → Postgres: 25 connections total (8x reduction)
  max_connections never reached
  Memory freed: ~1.75GB (175 fewer Postgres backend processes)
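The memory figure is straightforward arithmetic over the backend count, again using the ~10MB-per-backend estimate from earlier:

```python
def backend_memory_saved_mb(before: int, after: int,
                            mb_per_backend: int = 10) -> int:
    """Memory freed by shrinking the Postgres backend count,
    assuming a fixed per-backend footprint."""
    return (before - after) * mb_per_backend

# 200 backends down to 25: 175 fewer processes
print(backend_memory_saved_mb(200, 25))  # 1750
```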

PgBouncer is lightweight (single-process, ~50MB memory) and adds less than 1ms of latency. For any production application with more than a handful of app servers, it is essentially required.
