
Key metrics you should watch (Odoo + PgBouncer + PostgreSQL)

November 29, 2025 by John Wolf

If your stack is Odoo → PgBouncer → PostgreSQL, the metrics that “matter” are not 200 graphs: they are a small set that tells you if you are limited by the pool, by the DB, by crons, or by host resources.

Odoo ships with integrated HTTP, cron, and live-chat servers (multi-threaded or multi-process, depending on configuration).

This means your observability must cover application + pooler + DB + system.


1) The 4 “golden” metrics (to start the day)

  1. p95/p99 latency per endpoint (login, listing, create/write, reports)

  2. Error rate (HTTP 5xx, timeouts, “pool full”, “deadlock detected”)

  3. PgBouncer queue (whether clients are waiting for a server connection)

  4. Long transactions and lock waits in Postgres (whether the DB is “stuck”)

With this, you usually know where to look.


2) PgBouncer: the dashboard to know if the bottleneck is “pool” or “DB”

A) SHOW POOLS; (your metric #1)

  • cl_waiting: how many clients are trying to execute a transaction and have not yet been assigned a server connection by PgBouncer. If it rises steadily, it is a strong sign of a problem (pool too small, or connections held by long transactions).

  • maxwait (or similar): how long the oldest client has been waiting (if it increases, you are degrading the experience).

Tip: remember that PgBouncer creates pools per (database, user) pair, so check which exact pool is drowning.

Practical alerts

  • cl_waiting > 0 sustained for 1–2 min → investigate

  • maxwait > 1s sustained → already impacts UX; >5s is an incident
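As a sketch of how these two alerts could be wired up: the `cl_waiting` and `maxwait` field names match PgBouncer's SHOW POOLS output, while the `pool_alert` helper and the exact thresholds are assumptions of this example.

```python
# Hedged sketch: classify one SHOW POOLS row against the alert thresholds above.
# cl_waiting / maxwait are real SHOW POOLS columns; the helper is illustrative.

def pool_alert(row: dict) -> str:
    """Return 'ok', 'investigate', or 'incident' for one (database, user) pool."""
    if row["maxwait"] > 5:        # oldest client waiting > 5 s: incident
        return "incident"
    if row["maxwait"] > 1:        # > 1 s already impacts UX
        return "investigate"
    if row["cl_waiting"] > 0:     # waiters present: check pool size / long txns
        return "investigate"
    return "ok"
```

In practice you would sample SHOW POOLS every few seconds and alert only when the condition is sustained for 1–2 minutes, to avoid flapping on momentary spikes.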


B) SHOW STATS;

Use it to see throughput and aggregated times (spikes, drops, sharp variations). If SHOW POOLS looks fine but the system is slow, SHOW STATS helps you confirm whether there is a drop in TPS or an increase in average query time.


C) SHOW SERVERS; and SHOW CLIENTS;

When there is a queue:

  • SHOW CLIENTS; tells you who is waiting

  • SHOW SERVERS; tells you which server connections are busy

What to look for

  • Many busy servers and rising cl_waiting → pool/DB saturated

  • Few active servers but a lot of waiting → usually a backend that is slow to open connections, slow auth/TLS, or pool_size / max_db_connections limits
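The two patterns above can be distinguished mechanically from the same counters. A hedged sketch: `diagnose_queue` is a hypothetical helper fed from SHOW POOLS / SHOW SERVERS values (`sv_active`, configured `pool_size`, `cl_waiting`).

```python
# Illustrative triage of a PgBouncer queue, per the two patterns above.
# Inputs come from SHOW POOLS / SHOW SERVERS; the function itself is a sketch.

def diagnose_queue(sv_active: int, pool_size: int, cl_waiting: int) -> str:
    if cl_waiting == 0:
        return "no queue"
    if sv_active >= pool_size:
        # All server connections busy and clients still waiting.
        return "pool/DB saturated: raise pool_size or shorten transactions"
    # Waiters exist but the pool is not full: connections are slow to open.
    return "backend slow to open connections: check auth/TLS, max_db_connections"
```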


3) Odoo: metrics that almost always explain "why it happened"

A) Real concurrency: workers + cron

  • If you have heavy crons, max_cron_threads defines how many run concurrently (default 2).

    Key metric: backlog / cron execution time (are they overlapping? are they “eating into” the schedule?)

Practical alerts

  • A cron that should take 2 minutes but takes 20 → almost always locks/long transactions, or massive work without batching.

  • Two or more heavy crons running at the same time → typical spikes in cl_waiting.
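Overlap itself is easy to detect once you log cron start/end times. A minimal sketch, assuming you can export each run as a `(name, start, end)` tuple (the `overlapping_crons` helper is hypothetical):

```python
# Find pairs of cron runs whose execution windows overlap.
# runs: list of (name, start, end) tuples, times in epoch seconds.

def overlapping_crons(runs):
    runs = sorted(runs, key=lambda r: r[1])  # sort by start time
    pairs = []
    for i, (name_a, start_a, end_a) in enumerate(runs):
        for name_b, start_b, end_b in runs[i + 1:]:
            if start_b < end_a:  # next run starts before this one ends
                pairs.append((name_a, name_b))
    return pairs
```

Any non-empty result during business hours is a good candidate to explain the cl_waiting spikes mentioned above.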


B) Typical errors you should count as metrics

  • PoolError: The Connection Pool Is Full

  • deadlock detected

  • request/report timeouts

  • reconnect loops

If you don't turn them into counters/alerts, you'll see them late (when the user screams).
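Turning these log lines into counters can be as simple as the sketch below. The regex patterns and names are assumptions of this example; a real deployment would feed a metrics backend rather than an in-process Counter.

```python
import re
from collections import Counter

# Patterns for the error classes listed above (illustrative, case-insensitive).
ERROR_PATTERNS = {
    "pool_full": re.compile(r"The Connection Pool Is Full", re.I),
    "deadlock": re.compile(r"deadlock detected", re.I),
    "timeout": re.compile(r"timeout", re.I),
}

def count_errors(log_lines):
    """Tally occurrences of each known error pattern across log lines."""
    counts = Counter()
    for line in log_lines:
        for name, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts
```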


C) Latency by type of operation

Separate them (even with simple tags):

  • interactive web (forms/lists)

  • imports / mass operations

  • PDF reports

  • integrations (webhooks, APIs)

In Odoo, a PDF report may look like “just a screen”, but it is often the heaviest consumer of CPU/RAM and DB.
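A tag-based p95/p99 breakdown can be computed in a few lines. This is a nearest-rank percentile sketch; the function names are illustrative, and a real setup would use your metrics system's histograms instead.

```python
from collections import defaultdict

def percentile(samples, p):
    """Nearest-rank percentile; good enough for a dashboard sketch."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

def latency_report(events):
    """events: list of (tag, latency_ms) pairs; returns {tag: (p95, p99)}."""
    by_tag = defaultdict(list)
    for tag, ms in events:
        by_tag[tag].append(ms)
    return {tag: (percentile(v, 95), percentile(v, 99))
            for tag, v in by_tag.items()}
```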


4) PostgreSQL: the minimum to know whether the problem is locks, long transactions, or resources

A) pg_stat_activity (real time)

This activity view is part of PostgreSQL's statistics system.

What to watch

  • long transactions (xact_start very old)

  • sessions waiting for locks (wait_event_type, wait_event)

  • number of active vs idle sessions

Practical alerts

  • a transaction > 60–120 s during peak hours (business-dependent) → investigate

  • many sessions waiting for locks → inspect the blocking tree
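Both checks can be driven from pg_stat_activity. The SQL below uses real columns (pid, usename, state, xact_start, wait_event_type, wait_event); the Python filter and the 120-second threshold are this sketch's assumptions.

```python
from datetime import datetime, timedelta

# Query sketch against pg_stat_activity for long-running transactions.
LONG_TXN_SQL = """
SELECT pid, usename, state, wait_event_type, wait_event,
       now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '120 seconds'
ORDER BY xact_start;
"""

def long_transactions(rows, now, max_age_s=120):
    """rows: dicts with 'pid' and 'xact_start' (datetime or None).
    Returns pids whose transaction is older than max_age_s."""
    return [r["pid"] for r in rows
            if r["xact_start"] is not None
            and (now - r["xact_start"]).total_seconds() > max_age_s]
```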


B) Locks and deadlocks

  • metric: number of lock waits / time spent waiting on locks

  • metric: deadlocks per minute (if this shows up, it's lock-ordering/design or high contention)


C) Autovacuum / bloat (if performance "degrades over time")

  • delayed autovacuum + large tables with churn = performance that slowly declines

  • it doesn't happen overnight, but it is a classic issue in high-activity Odoo installs


5) Operating system: the 6 signs you should not ignore

  1. CPU (total usage and "steal" if it's a VM)

  2. Load average vs cores (if load >> cores consistently, you are saturating)

  3. I/O wait and disk latency (a slow DB "looks like" an app problem)

  4. RAM (swap ≈ disaster for Odoo)

  5. File descriptors (PgBouncer + many clients may need more)

  6. Network (drops/retransmits between Odoo ↔ PgBouncer ↔ Postgres)
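Several of these signals reduce to trivial threshold checks once sampled. A sketch, with thresholds that are illustrative assumptions (tune them per host):

```python
# Illustrative host-level checks for signals 2-4 above, from sampled values.

def host_warnings(load1, cores, swap_used_mb, iowait_pct):
    """Return warning strings for a single sample of OS metrics."""
    warnings = []
    if load1 > 2 * cores:
        warnings.append("load >> cores: CPU/run-queue saturation")
    if swap_used_mb > 0:
        warnings.append("swapping: near-disaster for Odoo workers")
    if iowait_pct > 20:
        warnings.append("high I/O wait: slow disk masquerading as app slowness")
    return warnings
```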


6) A minimum dashboard (the one you will actually use)

PgBouncer Panel

  • cl_waiting, maxwait

  • sv_active, sv_idle

  • SHOW STATS (TPS / aggregated times)

Odoo Panel

  • p95/p99 latency (by "type")

  • errors (pool full, deadlocks, timeouts)

  • duration and overlap of crons (max_cron_threads matters here)

PostgreSQL Panel

  • active sessions / waiting locks (via pg_stat_activity)

  • deadlocks

  • slow queries (if you have pg_stat_statements)

  • autovacuum (trend)

Infra Panel

  • CPU, RAM, swap

  • I/O wait and latency

  • network (drops/retransmits)


7) Quick read: if X happens, look at Y

  • Rising cl_waiting → SHOW SERVERS + pg_stat_activity (long transactions/locks)

  • No queue in PgBouncer but everything is slow → Postgres (I/O, locks, queries) or host CPU/RAM

  • It slows down at a certain time of day → overlapping crons (check concurrency and duration; cron threads default to 2)

  • Auth/TLS errors → failed-connection metrics in PgBouncer + handshake latency

