PgBouncer can “talk” a lot… or just enough. In production, what you want is:
to know who is connecting and from where,
to detect queue / pool saturation,
to see real errors (auth, TLS, backend down),
and to be able to troubleshoot without going into debug for hours.
This post gives you a set of “minimally useful” logs, what each thing means, and what patterns to watch.
1) The goal: actionable logs (not noise)
Logs that you DO want
Connections / disconnections
Errors returned to the client (pooler errors)
Backend events (Postgres not reachable, auth to backend, reset)
Auth/TLS (SASL failures, certificates, etc.)
Reload/startup (to confirm that changes were applied)
Logs that you do NOT want 24/7
Permanent high verbosity (it kills your disk and hides the signal)
Excessive dumps during peak hours (they worsen both performance and diagnosis)
2) Recommended logging configuration (production baseline)
In pgbouncer.ini:
[pgbouncer]
; Where to log
logfile = /var/log/pgbouncer/pgbouncer.log
; or alternatively:
; syslog = 1
; syslog_ident = pgbouncer

; Useful signal
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1

; Verbosity (0–3). Production: 0 or 1
verbose = 0
What this gives you:
basic traceability of who enters/leaves,
errors that matter (those that impact the app),
without overwhelming you.
Tip: if you are on systemd, it often makes sense to log to journald and use journalctl -u pgbouncer (and/or forward to your log stack). If you already have ELK/Loki/Datadog, syslog/journald is usually more convenient than a file.
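For example, a couple of journalctl queries (assuming PgBouncer runs as a systemd unit named pgbouncer):

# follow PgBouncer output live
journalctl -u pgbouncer -f

# only warnings and errors from the last hour
journalctl -u pgbouncer --since "1 hour ago" -p warning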
3) The 3 flags that provide the most value
log_connections
Logs successful connections.
Useful for light auditing (“Are they connecting from where they should?”)
Useful for detecting unusual spikes (changes in connection patterns)
log_disconnections
Logs disconnections + reason.
Here you will see timeouts, resets, “client disconnected”, etc.
It's key for spotting patterns like "it drops every X seconds" (typical of poorly configured health checks or aggressive timeouts)
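A quick way to squeeze value out of log_disconnections, as a rough sketch (the log path and the exact "closing because: ... (age=...)" wording depend on your setup and PgBouncer version):

# group disconnections by reason, stripping the variable (age=...) suffix
grep 'closing because' /var/log/pgbouncer/pgbouncer.log \
  | sed -e 's/.*closing because: //' -e 's/ (age=.*$//' \
  | sort | uniq -c | sort -rn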
log_pooler_errors
This is gold: it logs the errors that PgBouncer delivers to the client.
If Odoo says “auth failed” or “no such user”, the real trace usually ends up here.
If there are “too many connections” or “pool is full”, it will be here too.
4) When to increase verbosity (and how to do it right)
Don't keep it high permanently. Use it as a “magnifying mode”.
Healthy procedure
Increase verbosity to 1 or 2 for a short window
Reproduce the problem
Lower it back down
verbose = 2
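A sketch of that cycle, assuming the admin console is reachable on 127.0.0.1:6432 with an admin user called pgbouncer:

# 1) edit pgbouncer.ini (verbose = 2), then apply it without restarting
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "RELOAD;"

# 2) reproduce the problem and note the exact time window

# 3) set verbose = 0 again in pgbouncer.ini and RELOAD once more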
Best practices
If you increase verbosity during peak hours, do it for 2–5 minutes, not 2 hours.
Accompany it with a "marker" in logs (note the exact time of the change).
5) The log patterns that really matter to you
A) Auth failing (client → PgBouncer)
What you will see (conceptually):
nonexistent user ("no such user")
invalid password/SCRAM ("SASL authentication failed", "password authentication failed")
PgBouncer HBA (if you use auth_type=hba)
What to do
Confirm that the user exists in userlist.txt or that auth_user/auth_query is correct
Confirm that you did RELOAD (many "it's not working" cases are actually "I didn't reload")
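A minimal sketch of both checks (file paths, the user name and the admin console details are assumptions):

# userlist.txt entries are quoted pairs: "user" "password-or-hash"
grep -n '"odoo"' /etc/pgbouncer/userlist.txt

# confirm the values that are actually loaded after RELOAD
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "SHOW CONFIG;" \
  | grep -E 'auth_type|auth_file|auth_query'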
B) PgBouncer accepts the client, but fails to Postgres (backend)
This is the typical case of "Odoo connects but then crashes when executing things".
You will see messages like:
"server login failed"
"cannot connect to server"
"FATAL" from Postgres propagated
connection timeouts to the backend
What it means
PgBouncer is fine, the problem is the server connection (network, TLS, credentials, Postgres overloaded, pg_hba.conf, etc.)
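To confirm it quickly, a sketch: connect from the PgBouncer host straight to Postgres, bypassing the pooler (host, database and user are placeholders):

# if this also fails, the problem is on the backend side, not in PgBouncer
psql "host=10.0.0.5 port=5432 dbname=odoo user=odoo sslmode=require" -c "SELECT 1;"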
C) Pool saturation (queue)
In logs, you will see it as:
"no more connections allowed"
pool/limits errors
clients disconnecting due to timeout
The right approach
Don't guess: confirm with metrics (SHOW POOLS;) and correlation with crons/long transactions.
If the pool is "hijacked", increasing pool_size doesn't always fix it (sometimes it just pushes the problem onto Postgres).
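As a complement to the metrics, a rough log check (the exact message wording can vary between PgBouncer versions):

# how often clients were rejected by the connection limits
grep -c 'no more connections allowed' /var/log/pgbouncer/pgbouncer.log

# clients kicked out while waiting for a server slot
grep -c 'query_wait_timeout' /var/log/pgbouncer/pgbouncer.log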
D) TLS Issues
If you enable TLS, the valuable logs are:
handshake failures
CA/hostname mismatch (in verify)
expired certificates
incompatible protocols/ciphers
Operational key
These errors tend to be "binary": either you connect or you don't, so the log allows you to fix it quickly (cert, CA, SNI/hostname, clock skew).
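A sketch for the backend-side TLS case (needs OpenSSL 1.1.1+ for -starttls postgres; host and port are placeholders):

# inspect the certificate Postgres presents: expiry dates and subject
openssl s_client -connect 10.0.0.5:5432 -starttls postgres </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -subject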
6) Logs vs SHOW commands: how to combine them
Logs tell you "what happened".
SHOW tells you "how it is now".
When there are incidents, use both:
Logs: what exact error and from which IP/user
SHOW POOLS;: is there a real queue or not
SHOW SERVERS;: which connections are busy
Postgres: pg_stat_activity for long transactions/locks
This avoids the typical mistake of "I increased pool_size and that's it" when the problem was an eternal lock.
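A sketch of that combination during an incident (hosts, ports and database names are assumptions):

# PgBouncer: cl_waiting > 0 sustained over time = real queueing
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"

# Postgres: who is holding the connections (long transactions / locks)
psql -h 10.0.0.5 -p 5432 -U postgres -d odoo -c \
  "SELECT pid, state, now() - xact_start AS xact_age, left(query, 60) AS query
     FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY xact_age DESC NULLS LAST
    LIMIT 10;"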
7) Log rotation (what almost everyone forgets)
If you log to a file (logfile = ...), define rotation with logrotate.
After rotating, sometimes you need PgBouncer to reopen the file (depending on the method). The common practice is to send HUP or use RELOAD (depending on how you operate it).
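A logrotate sketch (path and retention are assumptions; copytruncate avoids having to signal PgBouncer, at the cost of possibly losing a few lines during the copy):

/var/log/pgbouncer/pgbouncer.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

If you prefer a clean reopen instead of copytruncate, use a postrotate script that sends the HUP / RELOAD mentioned above.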
Minimum checklist
daily/weekly rotation
compression
reasonable retention
alerts if the disk exceeds a certain %
8) “Pro level”: what to alert from logs (no metrics yet)
If you don't have Prometheus/Grafana yet, you can get alerts from logs:
auth failures > X/min
backend connect failures > X/min
pool/limits errors (any occurrence = alert)
TLS handshake errors (any occurrence = alert)
unexpected restarts/reloads (config change outside of window)
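A minimal log-only alert, as a sketch to run from cron every minute (the threshold, the "authentication failed" wording and the mail destination are all assumptions):

#!/bin/sh
# count auth failures reported by PgBouncer in the last minute
COUNT=$(journalctl -u pgbouncer --since "1 minute ago" | grep -ci 'authentication failed')
if [ "$COUNT" -gt 20 ]; then
    echo "pgbouncer: $COUNT auth failures in the last minute" \
        | mail -s "pgbouncer auth alert" ops@example.com
fi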
9) Configs that tend to create unnecessary noise
permanent high verbosity
log_connections + many health checks that open/close connections (noise). Better: less aggressive health checks or a persistent connection.
config changes without a “marker” (then you don't know what to correlate)
Closure
If you have PgBouncer in production, the set that “matters” is:
log_connections=1
log_disconnections=1
log_pooler_errors=1
verbose=0 (and increase it only for incidents)
rotation done properly