If you run PgBouncer in front of PostgreSQL (and even more so if you run Odoo), SHOW POOLS; is the command that answers in 10 seconds:
Is there a queue?
How many real server connections are busy?
Am I limited by pool_size or by a slow/locked DB?
Which exact pool (db/user) is suffering?
PgBouncer defines it like this: one pool entry is created per (database, user) pair.
1) How to connect (in case you're missing the "admin context")
You connect to the "virtual" database pgbouncer and run the SHOW ... commands from there.
psql "host=PGBOUNCER_HOST port=6432 dbname=pgbouncer user=tu_admin"
2) Typical output example
pgbouncer=# SHOW POOLS;
 database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | pool_mode
----------+------+-----------+------------+-----------+---------+---------+-----------+----------+---------+-------------
 odoo     | odoo |        35 |          4 |        30 |      10 |       0 |         0 |        0 |       2 | transaction
Each row is one pool (a database + user combination).
3) What each column means (in plain terms)
Pool identity
database: name of the database (or the entry in [databases]).
user: the user connecting through that pool.
If you are multi-tenant (multiple DBs) or have several users, you will see many rows here: one pool, and one queue, per pair.
4) Client side (Odoo/your app → PgBouncer)
cl_active
Clients that are active or "OK": either they are linked to a server connection, or they are idle with no queries waiting.
Practical reading: this does not mean "they are executing SQL right now"; it means "they are not waiting for a connection".
cl_waiting
Clients that have already sent queries but have not yet received a server connection.
This is the number that matters most to you. GitLab's runbooks summarize it perfectly: cl_waiting indicates clients trying to execute a transaction for which PgBouncer could not assign a server connection immediately.
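To watch the queue evolve live, psql's \watch works here too: it simply re-sends the last query, so it runs fine against the admin console (the 2-second interval is an arbitrary choice):

pgbouncer=# SHOW POOLS;
pgbouncer=# \watch 2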
cl_active_cancel_req / cl_waiting_cancel_req
These are clients involved in query cancellations:
cl_active_cancel_req: they have already forwarded a cancel request to the server and are waiting for a response.
cl_waiting_cancel_req: they have not yet been able to forward the cancel request.
If you consistently see nonzero values here, it is usually a sign of:
timeouts,
hanging queries,
users canceling a lot,
or backend saturation.
5) Server side (PgBouncer → PostgreSQL)
sv_active
Connections to PostgreSQL linked to a client (that is: actually in use).
sv_idle
Connections that are free and reusable right now (ready to assign).
If cl_waiting > 0 and sv_idle is 0, you have usually run dry on free server connections.
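A quick scripted version of that check, as a sketch: it assumes the column order of the example output above (cl_waiting in column 4), so verify the header on your PgBouncer version first:

psql "host=PGBOUNCER_HOST port=6432 dbname=pgbouncer user=tu_admin" -At -c "SHOW POOLS;" \
  | awk -F'|' '$4 > 0 {print "queue in pool " $1 "/" $2 ": " $4 " clients waiting"}'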
sv_used
Connections that were idle "too long" and now require running server_check_query before being reused (governed by server_check_delay).
sv_tested
Connections currently running server_reset_query or server_check_query.
Practical reading: if this spikes a lot, review your reset/check settings and whether there is connection churn.
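You can check those settings without opening the .ini: SHOW CONFIG; from the same admin console lists every setting. A shell sketch that filters the relevant ones:

psql "host=PGBOUNCER_HOST port=6432 dbname=pgbouncer user=tu_admin" -At -c "SHOW CONFIG;" \
  | grep -E 'server_(reset_query|check_query|check_delay)'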
sv_login
Connections "in the process of logging in" (being opened).
Practical reading: if sv_login spikes, it could be (a quick test follows the list):
network latency to Postgres,
Postgres slow on auth,
or pool growing and taking time to open connections.
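A crude way to separate the first two causes from the third: time a fresh connection straight to Postgres, bypassing PgBouncer (host and credentials are placeholders). If this alone takes hundreds of milliseconds, the problem is upstream of PgBouncer:

time psql "host=PG_HOST port=5432 dbname=odoo user=odoo" -c "SELECT 1;"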
6) The 'real queue' metric (the one that stands alone)
maxwait (+ maxwait_us)
How long (in seconds) the oldest client in the queue has been waiting; maxwait_us adds the microsecond fraction.
PgBouncer's docs state it explicitly: if maxwait starts to rise, the current pool is not processing requests fast enough, either because the server is overloaded or because pool_size is too small.
7) pool_mode (why reading changes)
pool_mode shows the effective pooling mode of this pool.
With Odoo you typically use transaction, where the server connection is assigned only for the duration of a transaction and returned to the pool when it ends.
This matters because with long transactions (see the sketch after this list):
connections become 'hijacked'
cl_waiting increases
maxwait increases
even if the CPU is not at 100%.
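A sketch of how to spot those hijacking transactions from the Postgres side (the 30-second threshold is arbitrary; tune it to your workload):

SELECT pid, usename, state,
       now() - xact_start AS xact_age,
       left(query, 60)    AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '30 seconds'
ORDER BY xact_start;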
8) Typical patterns (how to interpret without guessing)
Pattern A — Small pool (or spikes): 'queue but DB ok'
cl_waiting > 0
sv_active near the limit
sv_idle ~ 0
Postgres is not saturated
➡️ Action: increase pool_size/default_pool_size or use reserve_pool_size for bursts (and validate with metrics).
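As a hedged pgbouncer.ini sketch of that action (the numbers are examples, not recommendations; size the pool against max_connections on the Postgres side):

; pgbouncer.ini (fragment)
; server connections per (database, user) pool
default_pool_size = 50
; extra connections allowed for bursts
reserve_pool_size = 10
; seconds a client must wait before the reserve kicks in
reserve_pool_timeout = 3

Then apply it without dropping clients, from the admin console:

pgbouncer=# RELOAD;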
Pattern B — Long transactions / locks: 'queue even though there isn't much CPU'
cl_waiting increases
sv_active stable but 'stuck'
maxwait increases steadily
GitLab's runbooks mention a typical cause: connections "hogged" by long queries/transactions.
➡️ Action: check pg_stat_activity / locks; cut long transactions; batch in crons; timeouts.
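For the locks part, pg_blocking_pids() (PostgreSQL 9.6+) makes the check almost trivial; for "cut long transactions", idle_in_transaction_session_timeout is the standard server-side guardrail:

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       left(query, 60) AS current_query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;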
Pattern C — Backend down or slow login
sv_login high
backend connection errors
➡️ Action: check network, Postgres, TLS, auth, DNS, etc.
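SHOW DATABASES; from the admin console is a good first stop here: it shows how PgBouncer sees each backend (host, port, and whether the entry is paused or disabled):

pgbouncer=# SHOW DATABASES;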
9) What commands to run right after SHOW POOLS;
1) See exactly which server connections are occupying the backend:
SHOW SERVERS;
2) See which clients are waiting:
SHOW CLIENTS;
And in PostgreSQL:
SELECT pid, usename, state, xact_start, wait_event_type, wait_event, query
FROM pg_stat_activity
ORDER BY xact_start NULLS LAST;
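And if that confirms a runaway query or transaction is the culprit, the standard (careful) escalation from the Postgres side; the PID is obviously illustrative:

-- polite: cancel only the current query
SELECT pg_cancel_backend(12345);
-- forceful: kill the whole backend (the client sees its connection drop)
SELECT pg_terminate_backend(12345);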
Closing
SHOW POOLS; is not "just a number": it is the panel that tells you whether you are:
limited by the pool (PgBouncer config),
limited by the DB (queries/locks),
or limited by long transactions (very common with Odoo + crons).