Identify and optimize slow queries

It's never too soon to check the Query Statistics tab in the Aiven Console (or to query the pg_stat_statements view in psql) to find which queries take a long time to run or run particularly often. Once you have identified a slow query, inspect its plan and execution with EXPLAIN ANALYZE to see whether you need to add a missing index or restructure your schema to improve performance.
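As a sketch, a query like the following against pg_stat_statements surfaces the most time-consuming statements (the total_exec_time and mean_exec_time column names apply to PostgreSQL 13 and later; the table and column in the EXPLAIN ANALYZE example are illustrative):

```sql
-- Top 10 statements by cumulative execution time
SELECT query,
       calls,
       total_exec_time,
       mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Then inspect how a suspect query is actually executed
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```

The EXPLAIN ANALYZE output shows whether the planner used an index or fell back to a sequential scan, and where the time was actually spent.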

Note that too many database indexes on a table can also cause problems for write performance due to the overhead of maintaining them.
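One way to spot candidates for removal is the pg_stat_user_indexes statistics view, which records how often each index has actually been scanned:

```sql
-- Indexes that have never been used since statistics were last reset
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname;
```

Note that idx_scan counts reset when statistics are reset, so interpret a zero with the collection period in mind.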

Handle an increase in database connections with connection pooling

When your application code scales horizontally to accommodate high load, you might find that you inadvertently reach the connection limit for your plan. Each connection in PostgreSQL runs in a separate process, which makes connections more expensive than, for example, threads: each one carries interprocess communication overhead and consumes a certain amount of RAM.
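You can check how close you are to the limit by counting sessions in pg_stat_activity and comparing the result against the configured maximum:

```sql
-- Current connection count vs. the configured maximum
SELECT count(*)                              AS current_connections,
       current_setting('max_connections')    AS max_connections
FROM pg_stat_activity;
```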

The most effective way to support a large number of connections is to use Aiven connection pooling, which is implemented using PgBouncer.


Move read-only queries to standby nodes

If your service is running a business-* or premium-* plan, you have standby nodes available in a high availability setup. These can serve read-only queries via direct connections to the Read-only replica URL, reducing the effect of slow queries on the primary node.
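To confirm that a session is connected to a standby rather than the primary, you can check pg_is_in_recovery(), which returns true on a replica:

```sql
-- Returns true when connected to a standby (read-only) node
SELECT pg_is_in_recovery();
```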

Move read-only queries to a remote read replica

We also offer the option to create a remote read replica service, which can be set up in the same or a different cloud and region. This provides a dedicated read-only service that can reduce the load on the primary when it is under heavy write load.
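Because a remote read replica replicates asynchronously, it can lag behind the primary under heavy write load; a rough way to check, run from the replica side, is:

```sql
-- Approximate replication lag, measured on the replica
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```

If your application is sensitive to stale reads, monitor this value before routing time-critical queries to the replica.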
