53200
PostgreSQL · ERROR · Critical · Insufficient Resources · MEDIUM confidence

out of memory

Production Risk

Persistent OOM conditions can destabilise the Postgres server. If the postmaster itself runs low on memory, it may fail to fork new backends, making the database unreachable. Repeated OOM events indicate a systemic memory configuration problem requiring immediate remediation.

What this means

The Postgres backend process exhausted the memory available to it while building a query plan, executing a sort, hash aggregation, or other memory-intensive operation. The statement is aborted but the server continues running.
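On PostgreSQL 14 and later, a backend's memory usage can be inspected from within the session itself via the pg_backend_memory_contexts view. A minimal sketch for seeing where allocations are concentrated (column set as of PG 14; earlier versions lack this view):

```sql
-- PostgreSQL 14+: list the current backend's largest memory contexts.
SELECT name, parent, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```

When an OOM occurs, Postgres also dumps a memory-context summary to the server log, which points at the same context names.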

Why it happens
  1. work_mem is set too high and many concurrent sort/hash operations exceed system RAM
  2. A query fetches an extremely large result set into memory (e.g., a huge ORDER BY without LIMIT)
  3. PL/pgSQL or application code accumulates large datasets in memory
  4. Other processes on the host have consumed most available memory, leaving little for Postgres and inviting the OS OOM killer
  5. max_connections * work_mem exceeds available RAM
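Cause 5 can be estimated directly from pg_settings. This is only a back-of-the-envelope worst case (it assumes one work_mem-sized operation per connection; real queries may run several sort/hash nodes each, or none):

```sql
-- Rough worst-case memory if every connection ran one full sort at once.
-- pg_settings reports work_mem in kB as a plain number.
SELECT (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections')
     * (SELECT setting::bigint FROM pg_settings WHERE name = 'work_mem')
     / 1024.0 AS worst_case_mb;
```

If this figure approaches or exceeds physical RAM, either work_mem or max_connections needs to come down.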
How to reproduce

A query with a large sort exceeds available backend memory.

trigger — this will error
-- Set work_mem very low to simulate OOM conditions:
SET work_mem = '64kB';

-- A large sort that cannot fit in 64kB:
SELECT * FROM generate_series(1, 1000000) AS s(n) ORDER BY n DESC;

expected output

ERROR:  out of memory
DETAIL:  Failed on request of size 65536.

Fix 1

Tune work_mem and use streaming/pagination

WHEN: A specific query type consistently triggers OOM.

-- Check current setting:
SHOW work_mem;

-- Increase for a specific heavy session only (not globally):
SET work_mem = '256MB';

-- Paginate large result sets instead of loading everything:
SELECT * FROM large_table ORDER BY id LIMIT 1000 OFFSET 0;

Why this works

work_mem controls the memory allocated per sort or hash operation per query node. Increasing it allows operations to complete in memory rather than spilling to disk temp files. Setting it per session avoids multiplying it across all max_connections.
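The increase can be scoped even more tightly than a session: SET LOCAL reverts the setting automatically when the transaction ends. A sketch (large_table and created_at are illustrative names, not from the reproduction above):

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- applies only within this transaction
SELECT * FROM large_table ORDER BY created_at;  -- hypothetical heavy query
COMMIT;  -- work_mem reverts to its previous value here
```

This way, a forgotten RESET cannot leave a pooled connection running every subsequent query with an oversized work_mem.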

Fix 2

Add an index to avoid in-memory sorts

WHEN: ORDER BY or GROUP BY is causing the memory pressure.

-- An index on the sort key allows an index scan in sort order,
-- avoiding an in-memory sort entirely:
CREATE INDEX idx_large_table_id ON large_table (id);

EXPLAIN SELECT * FROM large_table ORDER BY id;
-- Should now show "Index Scan" not "Sort"

Why this works

An index scan returns rows in index order, making a separate sort step unnecessary. The planner chooses an index scan when an appropriate index exists and the estimated cost is lower than a sort. This eliminates the need for work_mem for that operation.
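Whether a sort fits in memory can be verified with EXPLAIN (ANALYZE): Postgres reports the sort method it actually used. A sketch (the comments describe what to look for; exact sizes vary by system):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM large_table ORDER BY id;
-- Without the index, the plan contains a Sort node; if it spilled,
-- the line "Sort Method: external merge  Disk: ...kB" appears.
-- With the index in place, the plan shows an Index Scan and no Sort node.
```

A "quicksort  Memory: ...kB" sort method means the operation fit within work_mem; "external merge" means it spilled to temp files on disk.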

What not to do

Set work_mem to multiple gigabytes globally

Multiple concurrent queries each use work_mem per sort node; total memory usage can be work_mem * connections * nodes, easily exhausting system RAM.

Sources

PostgreSQL official documentation: Resource Consumption

src/backend/utils/mmgr/mcxt.c — MemoryContextAlloc()

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
