# Postgres table partitioning: how I manage 500M rows without query slowdowns
Our events table hit 200 million rows and queries were getting slow even with indexes. The VACUUM runs took hours and blocked other operations. I partitioned the table by month and suddenly queries that previously took 8 seconds took 200ms — because Postgres only scanned the relevant partition instead of the entire table. Here is the complete partitioning strategy.
## Declarative partitioning (Postgres 10+)
```sql
-- Create the partitioned parent table
CREATE TABLE events (
    id BIGSERIAL,
    user_id UUID NOT NULL,
    event_type TEXT NOT NULL,
    payload JSONB,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE events_2026_01 PARTITION OF events
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE events_2026_02 PARTITION OF events
    FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
CREATE TABLE events_2026_03 PARTITION OF events
    FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
CREATE TABLE events_2026_04 PARTITION OF events
    FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');

-- Indexes: create them on the parent (Postgres 11+) and they are
-- automatically created on every partition, current and future
CREATE INDEX ON events (user_id, created_at DESC);
CREATE INDEX ON events (created_at DESC);
```
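Rows are routed to the matching partition automatically at INSERT time. An insert whose `created_at` falls outside every partition's range fails outright, which is why you either pre-create partitions ahead of time or add a DEFAULT partition as a safety net. A quick sketch (sample values are made up):

```sql
-- Routed into the current month's partition automatically
INSERT INTO events (user_id, event_type, payload)
VALUES (gen_random_uuid(), 'login', '{}');

-- An out-of-range timestamp errors out with something like:
--   ERROR: no partition of relation "events" found for row
INSERT INTO events (user_id, event_type, created_at)
VALUES (gen_random_uuid(), 'login', '2027-01-01');

-- Optional safety net: a DEFAULT partition catches anything unmapped
CREATE TABLE events_default PARTITION OF events DEFAULT;
```

Note that rows landing in a DEFAULT partition later block creating a regular partition covering their range, so treat it as a catch-all to monitor, not a dumping ground.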
## Automatic partition creation with pg_partman
```sql
-- pg_partman automates partition creation and maintenance.
-- It is conventionally installed into its own schema
CREATE SCHEMA IF NOT EXISTS partman;
CREATE EXTENSION IF NOT EXISTS pg_partman SCHEMA partman;

-- Set up automated monthly partitioning (pg_partman 5.x syntax)
SELECT partman.create_parent(
    p_parent_table := 'public.events',
    p_control      := 'created_at',
    p_type         := 'range',
    p_interval     := '1 month',
    p_premake      := 3  -- create 3 future partitions in advance
);

-- Configure retention (keep 12 months of data)
UPDATE partman.part_config
SET retention = '12 months',
    retention_keep_table = false  -- actually DROP old partitions
WHERE parent_table = 'public.events';

-- Run maintenance periodically (e.g. from a cron job)
SELECT partman.run_maintenance();
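If you would rather not manage a cron entry, pg_partman also ships a background worker that calls `run_maintenance()` on a schedule from inside Postgres. Loading it is a `postgresql.conf` change and requires a restart; the interval and database name below are example values, not defaults you must use:

```ini
# postgresql.conf
shared_preload_libraries = 'pg_partman_bgw'
pg_partman_bgw.interval = 3600     # seconds between maintenance runs
pg_partman_bgw.dbname = 'mydb'     # database(s) to run maintenance in
```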
## Partition pruning: the key performance benefit
```sql
-- This query scans ONLY the April 2026 partition (1/24 of the data);
-- Postgres skips all other partitions (partition pruning)
SELECT COUNT(*), SUM((payload->>'value')::numeric)
FROM events
-- Half-open range: BETWEEN ... '2026-04-30' would silently drop
-- everything on April 30 after midnight, since created_at is a timestamp
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01'
  AND event_type = 'purchase';

-- EXPLAIN to verify partition pruning is working
EXPLAIN (ANALYZE)
SELECT * FROM events
WHERE created_at > '2026-04-01';
-- Should show an Append node with only the relevant partitions listed
```
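Pruning only works when the planner can compare the partition key column directly against the filter. Wrapping the key in an expression hides it from the pruning logic, and values only known at execution time (like `now()`) are pruned at runtime rather than plan time. A sketch of both cases:

```sql
-- BAD: transforming the partition key defeats pruning;
-- every partition gets scanned
SELECT * FROM events
WHERE date_trunc('month', created_at) = '2026-04-01';

-- GOOD: compare the raw column, so plan-time pruning applies
SELECT * FROM events
WHERE created_at >= '2026-04-01' AND created_at < '2026-05-01';

-- now() is resolved at execution time, so pruning happens at runtime;
-- EXPLAIN (ANALYZE) shows "Subplans Removed: N" under the Append node
SELECT * FROM events
WHERE created_at > now() - interval '7 days';
```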
## List partitioning for categorical data
```sql
-- Partition by region for geo-distributed data
CREATE TABLE orders (
    id UUID DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    region TEXT NOT NULL,
    amount_cents INTEGER NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
) PARTITION BY LIST (region);

CREATE TABLE orders_us PARTITION OF orders
    FOR VALUES IN ('us-east', 'us-west', 'us-central');
CREATE TABLE orders_eu PARTITION OF orders
    FOR VALUES IN ('eu-west', 'eu-central', 'eu-north');
CREATE TABLE orders_apac PARTITION OF orders
    FOR VALUES IN ('ap-south', 'ap-east', 'ap-southeast');

-- Default partition for any unmapped regions
CREATE TABLE orders_other PARTITION OF orders DEFAULT;
```
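The same pruning benefit applies to list partitions: a query filtering on `region` touches only the partition holding that value. For example, against the tables above:

```sql
-- Scans only orders_eu; orders_us, orders_apac and orders_other
-- are pruned away
SELECT SUM(amount_cents) / 100.0 AS total
FROM orders
WHERE region = 'eu-west'
  AND created_at >= '2026-01-01';
```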
## Dropping old partitions: instant data deletion
The most operationally powerful feature of partitioning: dropping a partition (say, a whole month of events) is instant, because Postgres just unlinks the partition's files instead of deleting individual rows:

```sql
-- Detach from parent (fast; makes the partition a standalone table).
-- On Postgres 14+, DETACH PARTITION ... CONCURRENTLY avoids holding
-- a blocking lock on the parent while it runs
ALTER TABLE events DETACH PARTITION events_2024_01;

-- Drop it (instant: just removes the files)
DROP TABLE events_2024_01;

-- vs the OLD way (takes hours, bloats the table, floods the WAL):
-- DELETE FROM events WHERE created_at < '2024-02-01';
-- VACUUM events;  -- reclaim space (also takes hours)
```
## Migrating an existing large table to partitioned
```sql
-- Near-zero-downtime migration strategy:

-- 1. Create the new partitioned table
CREATE TABLE events_partitioned (LIKE events INCLUDING ALL)
    PARTITION BY RANGE (created_at);
-- Create partitions covering the full date range of existing data...

-- 2. Copy data in batches (outside peak hours); use half-open ranges
-- so adjacent batches never overlap
INSERT INTO events_partitioned
SELECT * FROM events
WHERE created_at >= '2025-01-01' AND created_at < '2025-02-01';
-- Repeat for each month

-- 3. Switch inside one transaction. Briefly pause writes (or copy the
-- final delta first) so rows written after the last batch are not lost
BEGIN;
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_partitioned RENAME TO events;
COMMIT;

-- 4. Drop the old table after verifying the copy
DROP TABLE events_old;
```
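Before that final `DROP`, it is worth confirming nothing was lost in the copy. A minimal check, run after the rename in step 3 (so the old table is `events_old`):

```sql
-- Totals should match exactly
SELECT
    (SELECT COUNT(*) FROM events)     AS new_count,
    (SELECT COUNT(*) FROM events_old) AS old_count;

-- Per-partition row counts; useful for spotting a missed month
SELECT tableoid::regclass AS partition, COUNT(*)
FROM events
GROUP BY 1
ORDER BY 1;
```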
Partitioning is worth the complexity when: your table is over 100GB, your queries nearly always include a filter on the partition key, and you need to delete old data regularly. For smaller tables or tables without clear partition keys, the added complexity is not justified.