could not open file
Production Risk
If 58000 occurs when accessing relation files rather than COPY files, it indicates heap or index file loss, which is a data loss event requiring immediate recovery from backup. Check pg_class to map the OID in the error message to a table name.
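To map the file named in the error message back to a relation, the last path component (e.g. `base/16384/12345`) is the filenode. A minimal sketch, assuming a hypothetical filenode 12345 in the default tablespace:

```sql
-- Hypothetical: the error reported "base/16384/12345".
-- Tablespace OID 0 means the database's default tablespace.
SELECT pg_filenode_relation(0, 12345);

-- Equivalent lookup via pg_class:
SELECT oid::regclass AS table_name
FROM pg_class
WHERE pg_relation_filenode(oid) = 12345;
```

pg_filenode_relation() must be run while connected to the database that owns the file, since filenodes are per-database.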
Postgres could not open a file needed for a database operation. This includes relation data files, WAL segments, configuration files, or external files referenced by COPY or pg_read_file(). The OS returned an error (ENOENT, EACCES, EMFILE, etc.).
1. A relation file (heap or index) has been deleted or moved outside of Postgres
2. COPY ... FROM 'filename' referencing a file that does not exist or is not readable by the postgres OS user
3. Too many open file descriptors (EMFILE): max_files_per_process or the OS ulimit is too low
4. Permissions on the data directory or a tablespace directory changed
5. A tablespace symlink is broken after a migration or storage reconfiguration
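For cause 5, tablespace locations can be listed from SQL and compared against the symlinks in the data directory. A sketch:

```sql
-- List tablespaces and where Postgres expects them on disk
SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;
-- Built-in tablespaces (pg_default, pg_global) report an empty location.
-- Compare the output against the symlinks under $PGDATA/pg_tblspc/.
```

If a listed location no longer exists or the symlink points elsewhere, any access to relations in that tablespace will fail with this error.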
A COPY command references a server-side file that does not exist.
COPY users FROM '/tmp/nonexistent_file.csv' CSV HEADER;
expected output
ERROR: could not open file "/tmp/nonexistent_file.csv" for reading: No such file or directory
Fix 1
Verify the file path and permissions
WHEN: using COPY FROM/TO with a server-side file.
-- Check if the file exists and is readable (run as the postgres OS user):
--   ls -la /tmp/nonexistent_file.csv
-- Use \COPY for client-side files (avoids server file access):
--   psql -c "\COPY users FROM '/local/path/file.csv' CSV HEADER"
-- (\COPY streams from the client; no server file access needed)
Why this works
COPY FROM with a bare filename opens the file on the server as the postgres OS user. \COPY (the psql meta-command) streams the file from the client over the protocol connection, bypassing server-side file access entirely. \COPY is safer and more portable for client-side files.
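A minimal round trip with \copy, assuming a hypothetical users table and a path on the client machine:

```sql
-- Run inside psql; paths resolve on the client, not the server
\copy users TO '/local/path/users.csv' CSV HEADER
\copy users FROM '/local/path/users.csv' CSV HEADER
-- Server-side COPY to/from a file requires superuser or membership in
-- the pg_read_server_files / pg_write_server_files roles.
```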
Fix 2
Increase the open file descriptor limit
WHEN: EMFILE errors indicate file descriptor exhaustion.
-- Check the current setting:
SHOW max_files_per_process;
-- Increase in postgresql.conf:
--   max_files_per_process = 1000
-- (this parameter can only be set at server start, so a restart is
--  required; pg_reload_conf() is not enough)
-- Also raise the OS ulimit for the postgres user:
--   ulimit -n 65536   (in the postgres service unit or start script)
Why this works
Postgres tracks open file descriptors per backend in the virtual file descriptor (VFD) layer. max_files_per_process limits how many VFDs a single backend keeps open simultaneously. When this limit is reached, Postgres transparently closes and reopens files as needed; but if the OS ulimit is lower than what the backends collectively need, open() itself fails with EMFILE.
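Because every backend can hold up to max_files_per_process descriptors, a rough upper bound on the server's total descriptor usage, useful when sizing the OS ulimit, can be estimated from SQL. A sketch (it ignores background workers and WAL/temp files, so treat it as a floor for the ulimit, not an exact figure):

```sql
SHOW max_files_per_process;  -- per-backend VFD cap (default 1000)
SHOW max_connections;        -- number of regular backends

-- Rough upper bound on simultaneously open descriptors:
SELECT current_setting('max_connections')::int
     * current_setting('max_files_per_process')::int AS approx_fd_ceiling;
```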
✕ Run Postgres as root to bypass file permission errors
Running a network service as root is a critical security vulnerability; fix permissions instead.
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev