Not all companies need "enterprise architecture", but all need stability. The most practical way to get it right in Odoo is to choose a base configuration according to size, and then adjust it with metrics.
In this post, I give you three profiles (small, medium, large) with recommended values for:
Odoo: workers, max_cron_threads, db_maxconn
PgBouncer: pool_mode, default_pool_size, reserve_pool_size, max_client_conn
PostgreSQL: max_connections (indicative)
Minimum observability
Assumptions:
Odoo behind Nginx/HAProxy (TLS)
PgBouncer in pool_mode=transaction for ORM traffic (the typical setup with Odoo)
1 main DB per instance (if you have multi-tenant, see note at the end)
0) Quick rule for reading this guide
Workers provide web concurrency (the ability to handle requests).
PgBouncer converts that concurrency into a controlled number of actual connections to Postgres.
Cron jobs can destroy your p99 if they run too much in parallel.
If in doubt: start conservatively, measure with SHOW POOLS; (see the sketch below) and adjust.
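For reference, a minimal way to read those signals, assuming PgBouncer's admin console listens on its default port 6432 and your user is listed in stats_users or admin_users:

```sql
-- Connect to the PgBouncer admin console first, e.g.:
--   psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer   (port/user are assumptions)
SHOW POOLS;   -- watch cl_active, cl_waiting (queued clients) and maxwait
SHOW STATS;   -- per-database totals and averages for queries and transactions
```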
Profile 1: Small company (SME / 10–50 users)
Typical hardware
Odoo/App: 2–4 vCPU, 8–16 GB RAM
Postgres: same host or separate (ideal 2–4 vCPU, 8–16 GB RAM)
Odoo (odoo.conf)
workers = 5–9
max_cron_threads = 1
db_maxconn = 32
Why this way:
The main enemies here are swap and overlapping crons. Better to have less concurrency, but keep it stable.
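As a concrete starting point, a minimal odoo.conf sketch for this profile (the memory limits are Odoo's stock defaults, shown as an assumption you should tune to your RAM):

```ini
[options]
proxy_mode = True          ; behind Nginx/HAProxy, per the assumptions above
workers = 5                ; lower end of the 5-9 range; ~2*vCPU+1 is a common rule of thumb
max_cron_threads = 1       ; one cron at a time: overlap is the enemy here
db_maxconn = 32
limit_memory_soft = 2147483648   ; ~2 GiB per worker (stock default)
limit_memory_hard = 2684354560   ; ~2.5 GiB hard cap (stock default)
```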
PgBouncer (pgbouncer.ini)
pool_mode = transaction
default_pool_size = 20–40
reserve_pool_size = 5–10
reserve_pool_timeout = 5
max_client_conn = 500–1000
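Put together, a pgbouncer.ini sketch with mid-range values from this profile (the [databases] entry, addresses and auth_type are assumptions for illustration):

```ini
[databases]
odoo = host=127.0.0.1 port=5432 dbname=odoo

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256        ; assumption: match your Postgres auth setup
pool_mode = transaction
default_pool_size = 30           ; mid-range of 20-40
reserve_pool_size = 5
reserve_pool_timeout = 5
max_client_conn = 500
```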
PostgreSQL (guideline)
max_connections = 150–250 (don't inflate it "just in case"; let PgBouncer do the managing)
"Signals" to watch for
cl_waiting in PgBouncer (if it stays above zero for sustained periods, check crons/locks first)
duration of the heaviest cron
Profile 2: Medium-sized company (50–300 users)
Typical hardware
Odoo/App: 8 vCPU, 32 GB RAM
Postgres: 8 vCPU, 32–64 GB RAM (ideally separate)
Odoo
workers = 13–17
max_cron_threads = 2
db_maxconn = 64
Why this way:
Throughput matters here. Two crons in parallel is fine, but if you have massive crons, you'll need to do batching.
PgBouncer
pool_mode = transaction
default_pool_size = 60–100
reserve_pool_size = 15–30
reserve_pool_timeout = 5
max_client_conn = 1500–3000
Tip: if raising default_pool_size makes Postgres worse, stop raising it: you are hitting contention/locks. See the sketch below.
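A sketch of the values that change relative to the small profile (everything else can stay as in the earlier sketches):

```ini
; odoo.conf
workers = 15                 ; mid-range of 13-17
max_cron_threads = 2
db_maxconn = 64

; pgbouncer.ini
default_pool_size = 80       ; mid-range of 60-100; back off if Postgres worsens
reserve_pool_size = 20
max_client_conn = 2000
```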
PostgreSQL (guideline)
max_connections = 250–400
"Signals" to watch for
maxwait in PgBouncer
transactions > 2 min in Postgres (a leading indicator of queuing; see the query after this list)
p99 in critical operations (validations, closures, inventory)
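To catch that second signal, a query sketch against pg_stat_activity (run it on the Postgres side; the 2-minute threshold mirrors the signal above):

```sql
SELECT pid,
       usename,
       state,
       now() - xact_start AS xact_age,
       left(query, 80)    AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '2 minutes'
ORDER BY xact_age DESC;
```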
Profile 3: Large company (300–1500+ users / intensive operation)
Typical hardware (reasonable minimum)
Odoo/App: 16–32 vCPU, 64–128 GB RAM (often 2+ Odoo nodes)
Postgres: 16–32 vCPU, 128 GB+ RAM, fast disks (IOPS matter)
Odoo
workers = 25–45 (depending on cores and RAM)
max_cron_threads = 2–4 (only if crons are well designed)
db_maxconn = 64 (often better not to raise it further; scale PgBouncer/DB)
Why this way:
In large setups, the enemy is contention (locks) and long transactions. More workers do not fix a design problem.
PgBouncer
pool_mode = transaction
default_pool_size = 120–200 (if Postgres supports it)
reserve_pool_size = 30–60
reserve_pool_timeout = 5
max_client_conn = 5000–10000
Extra (important): define limits (see the sketch after this list):
max_db_connections to prevent a DB from "eating" everything
and alerts on cl_waiting before incidents
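A combined sketch for this profile with the limits in place (mid-range values; max_db_connections = 200 is an assumption sized against a max_connections of 400–800, and the cl_waiting alert itself lives in your monitoring stack, not here):

```ini
; odoo.conf (per Odoo node)
workers = 30                     ; within 25-45, depending on cores and RAM
max_cron_threads = 2
db_maxconn = 64

; pgbouncer.ini
pool_mode = transaction
default_pool_size = 150          ; mid-range of 120-200, only if Postgres copes
reserve_pool_size = 40
reserve_pool_timeout = 5
max_client_conn = 8000
max_db_connections = 200         ; hard cap per database: no single DB eats it all
```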
PostgreSQL (guideline)
max_connections = 400–800 (with well-configured PgBouncer, you don't need thousands)
"Signals" to watch for
locks and recurring blockages (deadlocks/serialization)
growth of p99 when crons or imports are running
autovacuum (if it lags, performance degrades over time)
Note: if you have multi-tenant (multiple DBs in the same PgBouncer)
Remember that PgBouncer creates pools by (db, user). This means that:
if you have N databases, the “budget” of real connections is distributed among N pools
it is advisable to use max_db_connections and size default_pool_size with that distribution in mind (see the sketch below)
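A worked sketch, assuming 4 tenant databases and a Postgres max_connections of 400 (database names and numbers are illustrative): keep the sum of per-database caps well under the Postgres ceiling.

```ini
[databases]
tenant_a = host=127.0.0.1 dbname=tenant_a
tenant_b = host=127.0.0.1 dbname=tenant_b
tenant_c = host=127.0.0.1 dbname=tenant_c
tenant_d = host=127.0.0.1 dbname=tenant_d

[pgbouncer]
pool_mode = transaction
default_pool_size = 40       ; applies per (db, user) pool, not globally
max_db_connections = 80      ; per-DB cap: 4 DBs x 80 = 320 < 400 max_connections
```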
“Minimum common” config for any size (the baseline that never fails)
Odoo
proxy_mode = True
/websocket/ routed to the gevent_port (if you use livechat/real time)
list_db = False
logs with rotation (see the sketch below)
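The Odoo side of that baseline as a sketch (all real odoo.conf options; the log path is an assumption):

```ini
[options]
proxy_mode = True                  ; trust X-Forwarded-* headers from Nginx/HAProxy
gevent_port = 8072                 ; websocket/live chat port the proxy routes to
list_db = False                    ; hide the database manager in production
logfile = /var/log/odoo/odoo.log   ; assumption: adjust the path
logrotate = True                   ; built-in log rotation
```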
PgBouncer
pool_mode = transaction
useful logs (log_pooler_errors=1)
SHOW POOLS; as metric #1
PostgreSQL
reasonable timeouts (lock_timeout, statement_timeout; see the sketch after this list)
monitoring long transactions (the pg_stat_activity query from profile 2 works here)
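Those timeouts as a postgresql.conf sketch (values are assumptions to tune; a global statement_timeout can abort legitimate long jobs such as imports, so many setups apply it per role with ALTER ROLE instead):

```ini
# postgresql.conf
lock_timeout = '10s'                          # fail fast instead of queuing on locks
idle_in_transaction_session_timeout = '5min'  # drop abandoned open transactions
statement_timeout = '2min'                    # careful: may abort heavy crons/imports
```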
Conclusion: the best advice for this to work
You do not size “by intuition,” you size by signals:
If there is cl_waiting: either the pool is small or there are long transactions/locks.
If Postgres is at 100%: do not increase the pool, optimize queries/indexes or reduce concurrency.
If crons step on each other: reduce parallelism or do batching.