PostgreSQL Performance Tuning – A Practical Guide for DBAs
PostgreSQL is powerful, reliable, and open-source — but default configurations are not optimized for performance.
That’s where PostgreSQL performance tuning becomes essential.
If your queries are slowing down or your system struggles under load, tuning a few key parameters can drastically boost speed and efficiency.
Here are some key areas to focus on:
Query Optimization
- Use EXPLAIN and EXPLAIN ANALYZE to understand query execution plans.
- Avoid SELECT * — fetch only required columns.
- Review slow queries from pg_stat_statements (see the example after this list).
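A minimal sketch of both checks, assuming a hypothetical users table (in PostgreSQL 13+ the pg_stat_statements timing column is total_exec_time; older versions call it total_time):

EXPLAIN ANALYZE SELECT id, email FROM users WHERE email = 'a@example.com';

-- Top 5 queries by cumulative execution time (requires the pg_stat_statements extension):
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;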
Indexing Strategy
- Create indexes on frequently filtered or joined columns.
- Use GIN and GiST indexes for full-text and JSONB searches.
- Drop unused or duplicate indexes — they slow down writes! (See the example after this list.)
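For example, a GIN index for JSONB containment searches plus a quick scan for never-used indexes (the orders table and payload column here are placeholders):

CREATE INDEX orders_payload_gin ON orders USING gin (payload);

-- Indexes never scanned since statistics were last reset:
SELECT indexrelid::regclass AS index_name, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;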
Memory Configuration
Tune parameters like shared_buffers, work_mem, maintenance_work_mem, and effective_cache_size.
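These can be changed without hand-editing postgresql.conf; a minimal sketch with illustrative values for a mid-size server (shared_buffers only takes effect after a restart):

ALTER SYSTEM SET shared_buffers = '8GB';         -- restart required
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET effective_cache_size = '24GB';
SELECT pg_reload_conf();                         -- applies the reloadable settings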
Vacuum & Analyze
- Regularly run VACUUM and ANALYZE to remove dead tuples and update stats.
- Enable autovacuum but fine-tune thresholds for high-write tables (see the sketch after this list).
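For a high-write table, per-table storage parameters can override the global autovacuum thresholds; busy_table is a placeholder:

ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.05,    -- vacuum once ~5% of rows are dead
    autovacuum_analyze_scale_factor = 0.02    -- re-analyze once ~2% of rows change
);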
Connection Management
- Use connection pooling (via pgBouncer or Pgpool-II) to handle high traffic efficiently.
- Tune max_connections wisely — more isn’t always better (see the check after this list).
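Before touching max_connections, check actual usage against the current limit:

SELECT count(*) AS connections_in_use,
       current_setting('max_connections') AS connection_limit
FROM pg_stat_activity;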
PostgreSQL Performance Tuning Starting Points
1. Configuration & Authentication
max_connections
Set to GREATEST(4 × CPU cores, 100) for balanced concurrency. Avoid setting it too high to prevent memory waste — use a pooler like pgBouncer for extra connections.
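A worked example of the formula, assuming an 8-core server:

-- GREATEST(4 × 8 cores, 100) = 100
ALTER SYSTEM SET max_connections = 100;   -- restart required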
2. Resource Usage
shared_buffers
Start with LEAST(RAM/2, 10GB). This defines PostgreSQL’s cache size — balancing it with OS caching ensures stable performance.
work_mem
Use ((Total RAM - shared_buffers)/(16 × CPU cores)) as a base. Too high a value can exhaust memory, since several plan nodes (sorts, hashes) may each use this much within a single query.
maintenance_work_mem
Used for maintenance tasks like VACUUM and CREATE INDEX. Set around 1GB to speed up heavy operations safely.
autovacuum_work_mem
Memory used by autovacuum workers. Setting around 8MB lets workers handle ~1.4M dead tuples efficiently.
effective_io_concurrency
Helps optimize read-ahead on disk operations. For SSDs, use 200 to maximize parallel I/O efficiency.
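Putting these formulas together for an assumed 32 GB, 8-core server with SSD storage:

-- shared_buffers = LEAST(32 GB / 2, 10 GB) = 10 GB
-- work_mem = (32 GB - 10 GB) / (16 × 8 cores) ≈ 176 MB
ALTER SYSTEM SET shared_buffers = '10GB';          -- restart required
ALTER SYSTEM SET work_mem = '176MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET autovacuum_work_mem = '8MB';
ALTER SYSTEM SET effective_io_concurrency = 200;
SELECT pg_reload_conf();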
3. Write-Ahead Log (WAL)
wal_compression
Enable (on) to reduce I/O during heavy write operations. It compresses full-page images written to the WAL for better disk efficiency.
wal_log_hints
Set to on — required for tools like pg_rewind and recovery operations.
wal_buffers
Set to 64MB. Provides sufficient memory for WAL writes before flushing to disk, improving throughput.
checkpoint_timeout
Set to at least 15 minutes. Longer timeouts reduce I/O load but increase crash recovery time.
checkpoint_completion_target
Set to 0.9 to spread checkpoint writes evenly and avoid I/O spikes.
max_wal_size
Use ½–⅔ of WAL disk space. Ensures predictable checkpointing and prevents disk overflow.
archive_mode
Keep on to enable WAL archiving and future recovery options.
archive_command
Set a placeholder (":") until a real archive process is configured — this avoids errors in WAL recycling.
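The WAL settings above as one sketch; the max_wal_size value assumes roughly 16 GB of dedicated WAL disk space:

ALTER SYSTEM SET wal_compression = on;
ALTER SYSTEM SET wal_log_hints = on;                   -- restart required
ALTER SYSTEM SET wal_buffers = '64MB';                 -- restart required
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET max_wal_size = '8GB';                 -- about half the assumed WAL disk
ALTER SYSTEM SET archive_mode = on;                    -- restart required
ALTER SYSTEM SET archive_command = ':';                -- placeholder until real archiving is configured
SELECT pg_reload_conf();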
4. Query Optimization
random_page_cost
Defines the cost of random reads. Set to 1.1 for SSDs, keeping sequential and index scans balanced.
effective_cache_size
Estimate available cache memory — typically 0.75 × total RAM. Helps PostgreSQL make smarter planner decisions.
cpu_tuple_cost
Reflects CPU effort per row processed. Set to 0.03 for a more realistic cost model than the default.
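The same planner settings as SQL, with effective_cache_size computed for an assumed 32 GB server:

ALTER SYSTEM SET random_page_cost = 1.1;           -- SSD storage
ALTER SYSTEM SET effective_cache_size = '24GB';    -- 0.75 × 32 GB RAM
ALTER SYSTEM SET cpu_tuple_cost = 0.03;
SELECT pg_reload_conf();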
5. Reporting & Logging
logging_collector
Turn on to capture PostgreSQL logs into files — vital for diagnostics and audits.
log_directory
Store logs outside the data directory for easier management and cleaner backups.
log_checkpoints
Enable to monitor checkpoint activity and verify it runs as expected.
log_line_prefix
Use '%m [%p-%l] %u@%d app=%a ' for clear, informative log entries (don’t forget the trailing space!).
log_lock_waits
Enable (on) to identify slow queries caused by lock contention.
log_statement
Set to ddl to log schema-level changes and maintain an audit trail.
log_temp_files
Set to 0 to log every temporary file — frequent temp files are a sign that work_mem may need adjustment.
timed_statistics (EPAS)
Enable to collect runtime performance metrics for advanced diagnostics.
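A sketch of these logging settings; the log_directory path is an assumption, and timed_statistics is omitted because it applies only to EDB Postgres Advanced Server:

ALTER SYSTEM SET logging_collector = on;                         -- restart required
ALTER SYSTEM SET log_directory = '/var/log/postgresql';          -- assumed path outside the data directory
ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_line_prefix = '%m [%p-%l] %u@%d app=%a ';   -- note the trailing space
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET log_statement = 'ddl';
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();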
6. Autovacuum
log_autovacuum_min_duration
Set to 0 to log all autovacuum activities — essential for tuning and monitoring.
autovacuum_max_workers
Increase from 3 → 5 to allow parallel vacuuming across multiple tables.
autovacuum_vacuum_cost_limit
Set to 3000 to permit more I/O per autovacuum cycle and reduce table bloat.
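As SQL (autovacuum_max_workers only changes after a restart):

ALTER SYSTEM SET log_autovacuum_min_duration = 0;
ALTER SYSTEM SET autovacuum_max_workers = 5;           -- restart required
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 3000;
SELECT pg_reload_conf();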
7. Client Connection Defaults
idle_in_transaction_session_timeout
Terminate idle transactions after 10 minutes to prevent lock blocking.
lc_messages
Set to C to produce English logs — ensures compatibility with log analyzers.
shared_preload_libraries
Enable pg_stat_statements for detailed query-level performance insights.
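As SQL; pg_stat_statements becomes usable only after a restart followed by CREATE EXTENSION:

ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
ALTER SYSTEM SET lc_messages = 'C';
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';   -- restart required
SELECT pg_reload_conf();
-- After the restart:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;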
Query Optimization in Practice
Some practical ways to optimize PostgreSQL queries:
1. Use Indexes
Indexes speed up data retrieval significantly.
Example:
If you frequently search for users by email:
CREATE INDEX users_email_idx ON users (email);
2. Use EXPLAIN to Analyze Queries
Before optimizing, understand how your query runs.
EXPLAIN SELECT COUNT(*) FROM table1 JOIN table2 ON table1.id = table2.table1_id;
EXPLAIN helps you inspect the execution plan and detect slow operations like sequential scans or nested loops.
3. Avoid Wildcards at the Beginning of Strings
Using % at the start of a LIKE pattern prevents index usage.
❌ Slow query:
SELECT * FROM users WHERE email LIKE '%@example.com';
✅ Faster query (the pattern is anchored at the start, so an index can be used):
SELECT * FROM users WHERE email LIKE '@example.com%';
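Note the two patterns match different rows: the anchored version finds prefixes, not suffixes. A sketch of two workarounds, assuming illustrative index names: a text_pattern_ops index so anchored LIKE can use a B-tree under any locale, and an index on the reversed string for genuine suffix searches:

CREATE INDEX users_email_pattern_idx ON users (email text_pattern_ops);

-- Suffix search: index the reversed string and query it reversed.
CREATE INDEX users_email_rev_idx ON users (reverse(email) text_pattern_ops);
SELECT * FROM users WHERE reverse(email) LIKE reverse('@example.com') || '%';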
4. Limit Returned Rows
SELECT * FROM users LIMIT 100;
Use LIMIT or pagination to reduce load on both the database and the application.
5. Use Appropriate Data Types
Choosing the right data type improves performance and memory usage.
Example:
ALTER TABLE users ALTER COLUMN age TYPE smallint;
Smaller data types mean less memory per row — leading to faster reads and writes.
6. Optimize Subqueries
Subqueries can often be rewritten as joins, which gives the planner more freedom to choose an efficient plan.
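For instance, a filter written as an IN subquery (hypothetical schema):

SELECT o.* FROM orders o
WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.city = 'New York');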
The same filter rewritten as a join:
SELECT o.*
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.city = 'New York';
7. Use Prepared Statements
Prepared statements are parsed once and can then be executed repeatedly with different parameters, reducing per-query overhead.
PREPARE get_users_by_name (text) AS
  SELECT * FROM users WHERE name = $1;
EXECUTE get_users_by_name('John');
EXECUTE get_users_by_name('Jane');
8. Enable Connection Pooling
Every new database connection adds overhead. Use connection pooling tools like pgBouncer or Pgpool-II to reuse connections.
📘 Example connection strings:
PostgreSQL (direct): postgres://user:password@hostname:5432/db
pgBouncer: postgres://user:password@hostname:6432/db
9. Regularly Analyze and Vacuum Tables
ANALYZE mytable;
VACUUM mytable;
- ANALYZE updates statistics for better query planning.
- VACUUM removes dead tuples and reduces table bloat.
Conclusion
At Learnomate Technologies, we’re here to support you every step of the way with top-notch training in PostgreSQL DBA and more.
For more detailed insights and tutorials, do check out our YouTube channel: www.youtube.com/@learnomate, where we regularly share practical tips and deep dives into essential database topics. And if you’re serious about mastering PostgreSQL DBA, head over to our website for our full training program: learnomate.org/training/postgresql-training/.
I’d love to connect with you, so don’t forget to follow my LinkedIn: https://www.linkedin.com/in/ankushthavali/. If you’re eager to read more about various technologies, explore our blog page here: https://learnomate.org/blogs/. Happy learning, and remember—tuning and training go hand-in-hand for database success!