
When PgBouncer is NOT enough (and what to do next)

December 27, 2025 by John Wolf

PgBouncer is great for managing connections and smoothing out spikes. But there comes a time when it can no longer save you: when the real problem is not how many connections you have, but what those connections are doing (or how long they take to finish).

This post helps you recognize that point and choose the right next step.


1) If your bottleneck is the query, PgBouncer is not going to “speed up” the DB

Typical signs

  • high p95/p99 even with low or zero cl_waiting

  • Postgres with high CPU/I/O and few connections

  • “everything becomes slow” even if you increase the pool

What to do

  • enable/use pg_stat_statements and target top queries

  • check indexes (especially on large sales, stock, accounting tables)

  • optimize domains/searches in Odoo, computed fields, giant search_read

  • check PDF reports (they are often the “hidden spike”)
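A quick way to find the top offenders, assuming pg_stat_statements is already in shared_preload_libraries (column names below are for PostgreSQL 13+, where total_time was renamed total_exec_time):

```sql
-- Top 10 queries by total execution time
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

A query with moderate mean_ms but huge calls (a computed field, a search_read in a loop) often matters more than one slow report.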

Key idea: PgBouncer organizes the queue; it doesn’t make the kitchen cook faster.


2) If there are locks or long transactions, PgBouncer can make things feel worse

In pool_mode=transaction, a long transaction “hijacks” a server connection until it finishes. If there are also locks, the pool fills up with waiting transactions.

Typical signs

  • cl_waiting rises, maxwait rises

  • in Postgres: wait_event_type = Lock, transactions with very old xact_start

  • slowness "in waves" (especially during cron hours)

What to do

  • keep transactions short: batch work in crons (small batches)

  • avoid external I/O within transactions (APIs/requests)

  • lock_timeout, statement_timeout, idle_in_transaction_session_timeout

  • order locks (same sequence of updates everywhere)
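The three timeouts above can be set per role rather than globally; the role name odoo and the values below are illustrative starting points, not recommendations:

```sql
-- Illustrative values; tune for your workload
ALTER ROLE odoo SET lock_timeout = '5s';          -- fail fast instead of queueing on a lock
ALTER ROLE odoo SET statement_timeout = '60s';    -- kill runaway statements
ALTER ROLE odoo SET idle_in_transaction_session_timeout = '120s';  -- drop sessions stuck mid-transaction
```

Per-role settings only apply to new sessions, so with PgBouncer in front you may need to reconnect the pool for them to take effect.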


3) If your DB is limited by I/O, PgBouncer does not compensate for slow disks

Typical signs

  • high I/O wait, high disk latency

  • DB "crawls" even if you reduce concurrency

  • spikes during reporting/accounting/stock hours

What to do

  • improve storage (IOPS/latency)

  • adjust autovacuum and monitor bloat

  • cache and "working set" in RAM (buffers)

  • separate workloads (heavy reports vs OLTP)


4) If the problem is RAM/swap in Odoo, PgBouncer is irrelevant

PgBouncer does not prevent Odoo from consuming RAM with workers, reports, and heavy modules.

Typical signs

  • non-zero swap during peak hours

  • unpredictable latency, rare timeouts

  • Odoo "revives" upon restart

What to do

  • reduce/adjust workers and memory limits

  • review PDF generation and bulk jobs

  • separate workers vs crons (if your operation allows)

  • size real RAM per worker (and measure RSS)
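For reference, these knobs live in odoo.conf; the numbers below are an illustrative sketch for a small host, not a sizing recommendation — measure your own per-worker RSS first:

```ini
; odoo.conf -- illustrative sizing, measure your own RSS
workers = 4
max_cron_threads = 1
limit_memory_soft = 2147483648   ; 2 GiB: worker recycled after the current request
limit_memory_hard = 2684354560   ; 2.5 GiB: request aborted immediately
limit_time_cpu = 60
limit_time_real = 120
```

The soft limit is the one that keeps long-running workers from slowly eating the host; the hard limit is the safety net.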


5) If your problem is “too many crons”, PgBouncer only shows you the queue

Parallel and massive crons can saturate the DB due to contention and long transactions.

Typical signs

  • strong degradation during time windows (early morning/closing)

  • cron backlog, overlaps, “eternal” jobs

What to do

  • lower max_cron_threads (start with 1–2)

  • rewrite crons to batches

  • stagger schedules and isolate heavy crons

  • move integrations to external queues (job queue) if already in “factory mode”


6) If you have session requirements (LISTEN/NOTIFY, prepared statements), PgBouncer in “transaction” mode is not enough

Typical signs

  • unstable real-time (bus), strange behavior with notifications

  • issues with drivers that depend on session (depending on the stack)

What to do

  • maintain transaction pooling for the normal ORM

  • and for session-bound traffic: a direct connection or a dedicated session pool

  • divide pools by traffic type
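In pgbouncer.ini you can point session-bound traffic at a second entry for the same database with its own pool_mode; the names and sizes here are illustrative:

```ini
[databases]
; normal ORM traffic: transaction pooling
odoo      = host=127.0.0.1 dbname=production pool_mode=transaction
; session-bound traffic (LISTEN/NOTIFY, prepared statements)
odoo_sess = host=127.0.0.1 dbname=production pool_mode=session pool_size=5
```

The session-bound clients then connect to odoo_sess while everything else keeps the cheap transaction pool.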


7) If you have already scaled Odoo and are still at the limit, you need architecture, not just pooling

PgBouncer does not replace:

  • horizontal scaling of Odoo (multiple nodes)

  • caches (Redis/HTTP cache) to reduce repeated reads

  • read replicas (when the pattern allows)

  • separation of workloads (reporting vs transactional)

  • refactor heavy processes and queues

Typical signs

  • you have already adjusted pools and workers, and the limit barely moves

  • the load increases and the operational "margin" disappears

What to do

  • multi-node Odoo + load balancing

  • cache where it hurts (catalogs, pages, repeated endpoints)

  • separate reporting/ETL (replica or pipeline)

  • govern "heavy work" with queues and rate limits


8) Quick checklist: "Is PgBouncer the bottleneck or just the messenger?"

  1. SHOW POOLS;

  • Does cl_waiting increase? (there is a queue)

  2. Postgres

  • Are there locks? Long xacts? High I/O wait?

  3. Odoo

  • Is there swap? Overlapping crons? Does p99 explode due to PDFs?

  4. If the pool is fine but the system is slow → it's not PgBouncer.
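The checklist can be run with three queries — the first through the PgBouncer admin console, the rest against Postgres (the 5-minute threshold is an arbitrary example):

```sql
-- PgBouncer admin console: queue depth per pool
SHOW POOLS;

-- Postgres: who is waiting on a lock right now
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';

-- Postgres: transactions open for more than 5 minutes
SELECT pid, now() - xact_start AS xact_age, state, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes';
```

If SHOW POOLS is calm but the other two light up, the problem is behind the pooler, not in it.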

Conclusion

PgBouncer is necessary in many scenarios, but it is not "the final solution." It is traffic control. When the real problem is queries, locks, I/O, RAM, or job design, you need to address those layers.
