# Docker Compose profiles: one file for dev, test, and local prod environments
I had three docker-compose files: docker-compose.yml, docker-compose.test.yml, and docker-compose.prod.yml. They were 70% duplicated and constantly drifting out of sync. Then I discovered Compose profiles — a single file where services declare which profiles they belong to, and you activate profiles at startup. Everything collapsed into one well-organized file.
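The mechanism is small: a service lists the profiles it belongs to, and it only starts when one of those profiles is activated. Stripped to its essentials (using the adminer service from the full file below):

```yaml
services:
  adminer:
    image: adminer
    profiles: [dev]   # only starts when the dev profile is active
```

Services with no `profiles` key always start, which is what makes the pattern work for core infrastructure.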
## The complete docker-compose.yml with profiles

```yaml
# Note: the top-level `version:` key is obsolete in the Compose v2 spec
# and has been dropped.
services:
  # Core infrastructure: no profile, so these always run
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  # prod profile: production build of the app. Profiled rather than
  # always-on so it cannot fight api-dev for host port 3000 when the
  # dev profile is active.
  api:
    build:
      context: .
      target: production  # uses the multi-stage build's production target
    profiles: [prod]
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:secret@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      postgres: { condition: service_healthy }
      redis: { condition: service_healthy }
    env_file:
      - .env.local

  # dev profile: hot reload, debug tools
  api-dev:
    build:
      context: .
      target: development  # dev target with ts-node, nodemon
    profiles: [dev]
    volumes:
      - ./src:/app/src  # hot reload
      - /app/node_modules  # keep the container's node_modules
    ports:
      - "3000:3000"
      - "9229:9229"  # Node debugger
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:secret@postgres:5432/myapp
    depends_on:
      postgres: { condition: service_healthy }

  # dev profile: database admin UI
  adminer:
    image: adminer
    profiles: [dev]
    ports:
      - "8080:8080"
    depends_on:
      - postgres

  # test profile: separate, throwaway test database
  postgres-test:
    image: postgres:16-alpine
    profiles: [test]
    environment:
      POSTGRES_DB: myapp_test
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    tmpfs:
      - /var/lib/postgresql/data  # in-memory for speed
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 10

  test-runner:
    build:
      context: .
      target: development
    profiles: [test]
    command: ["npm", "run", "test:integration"]
    environment:
      DATABASE_URL: postgresql://postgres:secret@postgres-test:5432/myapp_test
    depends_on:
      postgres-test: { condition: service_healthy }

  # monitoring profile: observability stack
  prometheus:
    image: prom/prometheus
    profiles: [monitoring]
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    profiles: [monitoring]
    ports:
      - "3001:3000"  # Grafana on host 3001; host 3000 belongs to the app
    depends_on:
      - prometheus

volumes:
  postgres_data:
```
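A service can also list several profiles and start whenever any of them is active. For example, a hypothetical mail-catcher shared by dev and test (an illustration, not part of the stack above):

```yaml
services:
  mailcatcher:
    image: sj26/mailcatcher   # hypothetical addition
    profiles: [dev, test]     # starts when either profile is active
    ports:
      - "1080:1080"           # web UI for captured mail
```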
## Usage commands

```shell
# Development: hot reload + adminer (api-dev binds host port 3000)
docker compose --profile dev up

# Tests: throwaway test postgres, stop everything when the runner exits
docker compose --profile test up --abort-on-container-exit

# Production simulation: core services plus the production api build
docker compose --profile prod up

# Development + monitoring
docker compose --profile dev --profile monitoring up

# Stop everything, including profile services; `down` only touches
# services whose profiles are active
docker compose --profile dev --profile test --profile monitoring --profile prod down
```
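Two related conveniences from Compose itself are worth knowing: profiles can be activated through the `COMPOSE_PROFILES` environment variable instead of repeated flags, and `config --services` shows what a given activation would start without starting anything. In recent Compose releases, explicitly naming a profiled service on the command line also activates its profiles automatically:

```shell
# Same as --profile dev --profile monitoring
COMPOSE_PROFILES=dev,monitoring docker compose up

# Dry run: list the services the test profile would include
docker compose --profile test config --services

# Naming a service activates its profiles; this starts adminer
# (plus its unprofiled dependency postgres) without --profile dev
docker compose up adminer
```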
## Convenience Makefile

```makefile
.PHONY: dev dev-bg test prod logs down clean

dev:
	docker compose --profile dev up --build

dev-bg:
	docker compose --profile dev up -d --build

# --exit-code-from implies --abort-on-container-exit and propagates the
# runner's exit status; tear down either way, then report that status
test:
	docker compose --profile test up --build --exit-code-from test-runner; \
	status=$$?; \
	docker compose --profile test down; \
	exit $$status

prod:
	docker compose --profile prod up --build

logs:
	docker compose logs -f api

down:
	docker compose --profile dev --profile test --profile monitoring --profile prod down

clean:
	docker compose --profile dev --profile test --profile monitoring --profile prod down -v --remove-orphans  # also removes volumes
```
## The .env.local pattern

```dotenv
# .env.local.example (committed to git)
STRIPE_API_KEY=sk_test_placeholder
SENDGRID_API_KEY=placeholder
JWT_SECRET=dev-secret-change-in-prod
```

```dotenv
# .env.local (not committed; each dev fills in their own)
STRIPE_API_KEY=sk_test_your_actual_key
SENDGRID_API_KEY=your_actual_key
JWT_SECRET=your-local-secret
```
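The two files suggest a tiny bootstrap for fresh checkouts. A sketch follows; the `.gitignore` guard is my addition, not something the original setup mentions, and the template is written inline only to keep the snippet self-contained (in a real repo it is already committed):

```shell
# Template contents: the same placeholders shown above, written here
# only so this snippet runs on its own
cat > .env.local.example <<'EOF'
STRIPE_API_KEY=sk_test_placeholder
SENDGRID_API_KEY=placeholder
JWT_SECRET=dev-secret-change-in-prod
EOF

# Seed a private copy only if this checkout doesn't have one yet
[ -f .env.local ] || cp .env.local.example .env.local

# Guard: make sure the private copy can never be committed
grep -qx '.env.local' .gitignore 2>/dev/null || echo '.env.local' >> .gitignore
```

Run once after cloning; the `[ -f ... ]` test means an existing `.env.local` is never overwritten.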
The profiles pattern eliminated my three docker-compose files and the drift between them. A new service declares its profiles once and is automatically available in the right environments. The test database runs on tmpfs, so integration tests hit an in-memory Postgres. And the monitoring profile is opt-in, so developers who don't need Prometheus and Grafana never pay the overhead of running them locally.