PostgreSQL is remarkably capable out of the box, and its defaults are conservative by design, chosen to run on a 1990s workstation without crashing. In 2026, on a machine with 16+ GB of RAM, those defaults leave significant performance on the table. This post covers the settings, patterns, and tools that make the most practical difference for developers running Postgres in production, without requiring you to become a full-time DBA.
The postgresql.conf settings that actually matter
The default Postgres configuration is famously cautious. These are the knobs worth adjusting first, in rough order of impact:
# postgresql.conf - tuned for a 16GB RAM production server
# Memory
shared_buffers = 4GB # 25% of RAM. Postgres's own cache.
effective_cache_size = 12GB # 75% of RAM. Tells query planner how much OS cache is available.
work_mem = 64MB # Per sort/hash operation, per connection. Set conservatively.
maintenance_work_mem = 1GB # For VACUUM, CREATE INDEX, etc. Can be higher.
# Write-ahead log
wal_buffers = 64MB # The auto-tuned default caps at 16MB; raising it can help write-heavy workloads.
checkpoint_completion_target = 0.9 # Spread checkpoint I/O over 90% of the interval.
max_wal_size = 4GB # Allow larger WAL before triggering checkpoint.
# Connections
max_connections = 100 # Lower than you think. Use PgBouncer instead.
# Parallelism
max_parallel_workers_per_gather = 4 # Max workers per Gather node, i.e. per parallel query.
max_worker_processes = 8
max_parallel_workers = 8
# Logging (add to your slow query log)
log_min_duration_statement = 100 # Log queries taking >100ms.
log_checkpoints = on
log_lock_waits = on
On work_mem: it's easy to set this too high. The value applies per sort or hash operation, and a single complex query can have several of these running simultaneously, multiplied by the number of connections. At 100 connections, work_mem = 512MB means you could theoretically use 50GB of RAM for sort buffers alone. Set it conservatively in the config file and use SET LOCAL work_mem = '512MB' in sessions that need more for batch jobs.
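The SET LOCAL pattern looks like this in practice. A minimal sketch; the table and query are illustrative, not from a real schema:

```sql
BEGIN;
-- Applies only until COMMIT/ROLLBACK; the global work_mem is untouched
SET LOCAL work_mem = '512MB';

-- A large aggregation that would otherwise spill its sort to disk
SELECT user_id, count(*) AS events
FROM analytics_events          -- hypothetical high-volume table
GROUP BY user_id
ORDER BY events DESC;
COMMIT;
```

Because SET LOCAL is transaction-scoped, a crash or rollback mid-job can't leave the elevated setting behind on a pooled connection.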
Connection pooling with PgBouncer: non-optional at scale
PostgreSQL creates a new OS process for every connection. Each process consumes roughly 5-10MB of RAM and has startup overhead. At 500+ concurrent connections, this becomes a serious problem: you'll exhaust memory before you exhaust query capacity, and connection overhead dominates your latency profile.
PgBouncer sits between your application and Postgres, maintaining a small pool of real Postgres connections and multiplexing many application connections onto them. For most web applications, transaction pooling mode is correct:
# pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction # Connection released after each transaction
max_client_conn = 1000 # App-facing connections
default_pool_size = 25 # Real Postgres connections per database/user pair
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
log_connections = 0 # Too noisy for production
Transaction pooling caveat: In transaction pooling mode, session-level state (prepared statements set with PREPARE, advisory locks, SET LOCAL variables, LISTEN/NOTIFY) does not persist across transactions because you may get a different backend connection each time. If your application relies on these, use session pooling instead, or disable prepared statements in your driver.
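You don't have to pick one mode globally: PgBouncer accepts pool_mode as a per-entry override in the [databases] section, so the few clients that need session state can connect through a separate entry. A sketch extending the config above (the mydb_session alias is illustrative):

```ini
[databases]
; Default entry: transaction pooling (inherits pool_mode from [pgbouncer])
mydb         = host=127.0.0.1 port=5432 dbname=mydb

; Same database, session pooling for clients using LISTEN/NOTIFY,
; advisory locks, or server-side prepared statements
mydb_session = host=127.0.0.1 port=5432 dbname=mydb pool_mode=session
```

Point only the stateful workers at mydb_session and keep the web tier on the transaction-pooled entry.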
Indexing: the one thing that always pays off
Most query performance problems are index problems. Three patterns worth knowing beyond the basic CREATE INDEX:
Partial indexes
If you're frequently querying a subset of rows, a partial index covers only that subset and is a fraction of the size of a full index:
-- Only index active users (99% of queries filter on this)
CREATE INDEX idx_users_email_active ON users(email) WHERE deleted_at IS NULL;
-- Only index unprocessed jobs
CREATE INDEX idx_jobs_pending ON jobs(created_at) WHERE status = 'pending';
-- The planner will use this automatically when the WHERE clause matches
Covering indexes (INCLUDE)
An index-only scan avoids touching the heap (the table) entirely if the index includes all columns the query needs. Use INCLUDE to add non-key columns to the index:
-- Query: SELECT email, name FROM users WHERE org_id = $1 ORDER BY created_at
-- Without INCLUDE: index scan on org_id, then heap fetch for email and name
CREATE INDEX idx_users_org_covering
ON users(org_id, created_at)
INCLUDE (email, name);
-- Now: index-only scan, no heap access
Expression indexes
If you filter on a computed value, index the expression, not the column:
-- Query: WHERE lower(email) = lower($1) (case-insensitive lookup)
CREATE INDEX idx_users_email_lower ON users(lower(email));
-- Query: WHERE date_trunc('day', created_at) = $1
CREATE INDEX idx_orders_day ON orders(date_trunc('day', created_at));
EXPLAIN ANALYZE: reading the output
Run EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) on any query you're tuning. The key numbers to look at:
- actual rows vs estimated rows: If these differ by more than 10×, the planner has bad statistics. Run ANALYZE tablename to refresh them, or increase default_statistics_target for columns with high cardinality.
- Seq Scan on large tables: Almost always means a missing index. Check the filter condition and create an appropriate index.
- Buffers: shared hit vs read: High read relative to hit means data isn't in shared_buffers; consider increasing shared_buffers, or it may just be a cold-cache situation.
- Sort Method: external merge Disk: The sort spilled to disk. Increase work_mem for this query's session.
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.id, u.email, count(o.id) as order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.org_id = 42 AND u.deleted_at IS NULL
GROUP BY u.id, u.email
ORDER BY order_count DESC
LIMIT 20;
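When the actual-vs-estimated mismatch from the first bullet shows up, raising the statistics target for the offending column is often the fix. A sketch against the example schema above (the column choice is illustrative):

```sql
-- Collect a larger sample for a high-cardinality column
-- (default_statistics_target defaults to 100)
ALTER TABLE users ALTER COLUMN org_id SET STATISTICS 1000;

-- Rebuild statistics so the planner sees the larger sample
ANALYZE users;
```

Re-run the EXPLAIN ANALYZE afterward; if the row estimates converge, the plan usually follows.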
VACUUM and autovacuum
Postgres uses MVCC: old row versions aren't deleted immediately; they're marked dead and cleaned up by VACUUM. Autovacuum handles this automatically, but its defaults are tuned for small databases. For tables that receive high update/delete rates, you'll want to tune autovacuum per-table:
-- For a high-churn table (e.g., a job queue or event log)
ALTER TABLE jobs SET (
autovacuum_vacuum_scale_factor = 0.01, -- vacuum when 1% of rows are dead (default 20%)
autovacuum_analyze_scale_factor = 0.005,
autovacuum_vacuum_cost_delay = 2 -- ms between vacuum I/O bursts (lower = faster)
);
Monitor bloat with pg_stat_user_tables, specifically n_dead_tup relative to n_live_tup. If dead tuples grow without being collected, autovacuum isn't keeping up and you'll see table bloat, increasing I/O and degrading query times.
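A minimal query for that check, using only standard pg_stat_user_tables columns (the 10,000-row threshold is an arbitrary starting point; tune it for your workload):

```sql
-- Tables where autovacuum may be falling behind, worst first
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(n_dead_tup * 100.0
             / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;
```

A table with a high dead_pct and a stale last_autovacuum is a candidate for the per-table settings above.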
pganalyze and the free tool pgBadger (log analysis) plus pg_activity (live monitoring) cover 90% of routine Postgres operational visibility without requiring a commercial monitoring solution. In our Studio deployments we export pg_stat_* views to Prometheus via postgres_exporter and alert on replication lag, bloat ratio, and long-running transactions.
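If you alert on replication lag, the raw numbers come from pg_stat_replication on the primary. A minimal sketch (PostgreSQL 10+, where the replay_lag interval column exists):

```sql
-- Lag per standby, measured on the primary
SELECT application_name,
       client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
       replay_lag  -- time-based lag as an interval
FROM pg_stat_replication;
```

postgres_exporter surfaces the same data; this query is useful for spot checks when an alert fires.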
One thing many developers skip: connection health
Set these in your application's connection pool or database driver, not just in Postgres. They prevent stale connections from causing mysterious failures after a database restart or network interruption:
# In your connection string or ORM config:
connect_timeout = 5 # Don't wait forever to connect
statement_timeout = 30000 # Kill queries running longer than 30s (ms)
lock_timeout = 10000 # Fail fast on lock contention rather than queueing
idle_in_transaction_session_timeout = 60000 # Kill sessions stuck in open transactions
The idle_in_transaction_session_timeout is particularly important: a session that opened a transaction and then stalled (due to an application bug, network issue, or long-running code) holds locks. Without this timeout, those locks can accumulate and cause cascading slowdowns across your entire application.
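To see whether you have this problem right now, pg_stat_activity shows every session stuck in an open transaction. A sketch using standard columns only:

```sql
-- Sessions sitting in an open transaction, oldest first
SELECT pid,
       usename,
       now() - xact_start AS xact_age,
       left(query, 60)   AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;

-- Terminate a stuck session manually if needed:
-- SELECT pg_terminate_backend(<pid>);
```

If this list is routinely non-empty before you enable the timeout, expect the timeout to start killing sessions; fix the application-side cause too.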