
Impact of cron jobs and long transactions (Odoo + PgBouncer): why they "break" your concurrency

November 26, 2025 by John Wolf

There are two things that can cause an apparently "well-sized" Odoo to suddenly become slow:

  1. Cron jobs running in parallel (and touching many rows)

  2. Long transactions that keep the DB "busy" for too long

When you add PgBouncer (especially in pool_mode=transaction), the effect is amplified: each long transaction "hijacks" a server connection from the pool, and what you see in the app is not "slow Postgres" but a queue.


1) What happens in practice (the causal chain)

A) Crons: more parallelism = more pressure on DB

Odoo allows running concurrent crons with max_cron_threads (default is usually 2).

That's great... until your crons:

  • process thousands of records,

  • update hot tables,

  • trigger recomputes,

  • or perform slow integrations (APIs + writing to DB).

Result: the number of simultaneous transactions increases.


B) Long transactions: the "silent blocking"

A long transaction doesn't just "take time"; it can also:

  • retain locks (and block others),

  • increase contention,

  • and, with PgBouncer in transaction pooling, retain a server connection until it finishes.

PgBouncer's documentation makes it clear: in transaction pooling, a server connection is released only when the transaction ends.
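As a point of reference, a minimal transaction-pooling setup might look like this in pgbouncer.ini (the values are illustrative, not from this article):

```ini
; pgbouncer.ini (sketch, illustrative values)
[pgbouncer]
pool_mode = transaction      ; server connection released at COMMIT/ROLLBACK
default_pool_size = 20       ; server connections per user/database pair
max_client_conn = 300        ; Odoo workers can open many client connections
```

With this mode, one transaction that runs for 60 seconds occupies one of those 20 server slots for the full 60 seconds, no matter how idle it is.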


2) Typical symptoms (what you see in production)

In PgBouncer

  • cl_waiting starts to rise (clients waiting for a server connection)

    GitLab's runbooks summarize it: cl_waiting indicates that clients want to execute a transaction but PgBouncer could not immediately assign them a server connection, and a common cause is long transactions “hogging” connections.


In Odoo

  • latency spikes (forms that slow down “in waves”)

  • intermittent timeouts

  • crons that overlap and become eternal (snowball effect)


In PostgreSQL

  • sessions that stay in active state for a long time

  • locks that accumulate (a long transaction can hold locks and block writes)


3) Why crons “exaggerate” the problem

Because crons are often massive jobs:

  • they process large batches,

  • they do write() in a loop,

  • they recalculate fields,

  • they create/validate documents,

  • or clean data.

And since max_cron_threads enables concurrency, suddenly you have N massive transactions at the same time.

Additionally, Odoo runs a cron worker within the stack (alongside HTTP and live-chat), and production deployments usually use multi-processing.


4) Quick diagnostic checklist (copy/paste)

A) PgBouncer: is there a queue?

In the admin console:

SHOW POOLS;
SHOW STATS;

What to look for:

  • cl_waiting rising → no server connection available, or connections being “hijacked”.


B) PostgreSQL: who is “hugging” the DB?

SELECT pid, usename, application_name, state, xact_start, query_start, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE datname = current_database()
ORDER BY xact_start NULLS LAST;

Look at:

  • a very old xact_start (long transaction)

  • wait_event_type related to locks


C) Locks: are there chained blocks?

SELECT locktype, mode, granted, count(*)
FROM pg_locks
GROUP BY 1,2,3
ORDER BY 4 DESC;
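To see who is blocking whom directly (PostgreSQL 9.6+), pg_blocking_pids() complements the aggregate view above:

```sql
-- Sessions currently blocked, and the PIDs blocking them
SELECT pid, usename, state, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```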


5) How to mitigate it (without “turning everything down”)


1) Tame cron concurrency

  • Start with max_cron_threads = 1–2 and raise it only with evidence.

  • If you have a “monster” cron, don't run it at the same time as other heavy ones (stagger their schedules).
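In odoo.conf terms, that starting point could be sketched as follows (the values are illustrative, not a recommendation for every deployment):

```ini
; odoo.conf (sketch, illustrative values)
[options]
workers = 4            ; HTTP workers (multi-processing)
max_cron_threads = 2   ; start low; raise only with monitoring evidence
```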


2) Avoid long transactions in crons (fix #1)

Instead of “processing everything”, process in batches:

  • search for IDs in chunks (e.g. 500–2000)

  • process them

  • commit/invalidate what's needed

  • and come back for the next batch

Benefit: each batch is a shorter transaction ⇒ you release PgBouncer connections sooner ⇒ less cl_waiting.
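A minimal sketch of that batching pattern in Python. The model name, domain and `_process()` method are hypothetical, and `env.cr.commit()` stands in for Odoo's cursor commit at the end of each batch:

```python
def chunked(ids, size=1000):
    """Yield successive slices of ids, `size` items at a time."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def run_cron_in_batches(env, batch_size=1000):
    # Hypothetical model/domain: search only for IDs (cheap, no prefetch)
    ids = env["my.model"].search([("state", "=", "pending")]).ids
    for batch_ids in chunked(ids, batch_size):
        records = env["my.model"].browse(batch_ids)
        records._process()   # the actual business logic (hypothetical method)
        env.cr.commit()      # end the transaction: PgBouncer can hand the
                             # server connection to other clients right away
```

Each commit closes the transaction, so the connection only stays “hijacked” for the duration of one batch instead of the whole job.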


3) Identify “long transactions” by design

Common causes:

  • crons that do too many write() calls in a loop

  • external integrations inside the same transaction

  • reports/mass actions that users run and that get “stuck”

Golden rule: external I/O (APIs) goes outside the transaction whenever possible.
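A hedged sketch of that rule: gather all the slow API responses first, then do a single short DB write phase. Here `fetch_remote` and `save_locally` are placeholder callables, not real Odoo or library APIs:

```python
def sync_orders(order_ids, fetch_remote, save_locally):
    """Keep slow network I/O out of the DB transaction.

    fetch_remote(order_id) -> dict : slow HTTP call, no transaction open
    save_locally(payloads)         : short transaction (write + commit)
    """
    # Phase 1: network only -- no server connection is held during this
    payloads = [fetch_remote(order_id) for order_id in order_ids]
    # Phase 2: one short DB write phase
    save_locally(payloads)
    return payloads
```

The opposite pattern (one HTTP call + one write per record, all inside one transaction) holds the connection for the sum of all the network latencies.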


4) Protections in Postgres

  • statement_timeout: prevents infinite queries

  • lock_timeout: prevents waiting for locks forever (better to fail fast and retry)

  • idle_in_transaction_session_timeout: kills forgotten “idle in transaction” sessions
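These can be scoped to the application role so they don't affect maintenance sessions. The role name `odoo` and the values below are assumptions; tune them to your workload:

```sql
-- Conservative starting points, applied only to the application role
ALTER ROLE odoo SET statement_timeout = '120s';
ALTER ROLE odoo SET lock_timeout = '10s';
ALTER ROLE odoo SET idle_in_transaction_session_timeout = '60s';
```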


5) Protections in PgBouncer (if there are spikes)

  • use reserve_pool_size for short bursts

  • monitor cl_waiting (if it grows in a sustained way, it’s not a “spike”: it’s a design or query problem)
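In pgbouncer.ini that might look like the following (illustrative values; reserve_pool_timeout is how many seconds a client waits before the reserve pool kicks in):

```ini
; pgbouncer.ini (sketch, illustrative values)
[pgbouncer]
default_pool_size = 20
reserve_pool_size = 5        ; extra server connections for short bursts
reserve_pool_timeout = 3     ; seconds a client waits before using the reserve
```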


6) The advice that saves the most incidents

If your system becomes slow at certain times and that coincides with crons:

  • Do not increase default_pool_size first.

  • First confirm whether there are long transactions holding connections.

  • Then reduce duration (batching) or reduce concurrency (max_cron_threads) before “giving Postgres more connections”.
