Error 2013
MariaDB · ERROR · Common · Connection · HIGH confidence

Lost connection to MySQL server during query

Production Risk

HIGH — active queries fail mid-execution.

What this means

Client error 2013 is raised when the connection to the server is lost while a query is actively executing, as opposed to error 2006 (MySQL server has gone away), which occurs on an idle connection. Common causes are expiry of net_read_timeout or net_write_timeout, or a single packet in the result exceeding max_allowed_packet.

Why it happens
  1. Server-side net_read_timeout or net_write_timeout expires while data is in transit
  2. A single packet in the result set is larger than max_allowed_packet — the server drops the connection mid-transfer
  3. Server OOM or crash during query execution
  4. Network interruption during a long-running query
  5. Client-side read timeout set too low in the driver or framework
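Before changing anything, it helps to inspect the current limits so you know which one is actually in play:

```sql
-- Inspect the server variables most often involved in error 2013
SHOW VARIABLES LIKE 'net_%timeout';
SHOW VARIABLES LIKE 'max_allowed_packet';
```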
How to reproduce

Execute a long-running query, or fetch a result set containing very large rows.

trigger — this will error
SELECT * FROM audit_log;  -- millions of rows, large BLOBs

expected output

ERROR 2013 (HY000): Lost connection to MySQL server during query

Fix 1

Increase net_read_timeout and net_write_timeout

WHEN: long-running queries time out before completing.

Increase net_read_timeout and net_write_timeout
SET GLOBAL net_read_timeout = 300;   -- 5 minutes
SET GLOBAL net_write_timeout = 300;

Why this works

net_read_timeout is how long the server waits for more data from the client before aborting the read; net_write_timeout is how long it waits for a block it is writing to be accepted by the client. Increasing both gives long-running queries and slow transfers more headroom. Note that SET GLOBAL affects only connections opened after the change; use SET SESSION to adjust the current connection.
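SET GLOBAL changes are lost on server restart. To persist them, set the variables in the server config — a sketch assuming a typical Debian-style MariaDB layout (the path varies by distribution):

```ini
# e.g. /etc/mysql/mariadb.conf.d/50-server.cnf (path is an assumption)
[mysqld]
net_read_timeout  = 300
net_write_timeout = 300
```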

Fix 2

Increase max_allowed_packet

WHEN: fetching rows with large BLOB or TEXT columns.

Increase max_allowed_packet
SET GLOBAL max_allowed_packet = 134217728;  -- 128 MB

Why this works

The result set is broken into packets; if any single packet (row) exceeds max_allowed_packet the server drops the connection.
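max_allowed_packet is enforced on both ends: the client library has its own limit that may also need raising (for the mysql CLI, via --max_allowed_packet). To persist the server-side value across restarts, a sketch of the config entry (file path is an assumption):

```ini
# e.g. /etc/mysql/mariadb.conf.d/50-server.cnf (path is an assumption)
[mysqld]
max_allowed_packet = 128M
```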

Fix 3

Paginate large queries instead of fetching all rows

WHEN: the full result set is too large to transfer in one go.

Paginate large queries instead of fetching all rows
-- Use LIMIT + OFFSET for pagination:
SELECT * FROM audit_log ORDER BY id LIMIT 1000 OFFSET 0;
SELECT * FROM audit_log ORDER BY id LIMIT 1000 OFFSET 1000;
-- Or cursor-based (keyset) pagination, which stays fast on deep pages:
SELECT * FROM audit_log WHERE id > 2000 ORDER BY id LIMIT 1000;
-- (2000 = the last id seen on the previous page)

Why this works

Fetching data in pages keeps each transfer small, reducing peak memory usage and staying under the packet and timeout limits. Note that LIMIT + OFFSET still scans and discards the skipped rows, so keyset pagination (WHERE id > last_seen_id) scales better on large tables.
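The keyset-pagination loop can be sketched as follows — simulated here against an in-memory list standing in for the audit_log table, since a live connection is not assumed; in real code each fetch_page call would run `SELECT * FROM audit_log WHERE id > %s ORDER BY id LIMIT %s`:

```python
# Cursor-based (keyset) pagination sketch. The in-memory "table" below is a
# stand-in for audit_log; fetch_page mimics the SQL
#   SELECT * FROM audit_log WHERE id > last_seen_id ORDER BY id LIMIT page_size
audit_log = [{"id": i, "event": f"event-{i}"} for i in range(1, 26)]

def fetch_page(last_seen_id, page_size):
    """Return up to page_size rows whose id is greater than last_seen_id."""
    rows = sorted(
        (r for r in audit_log if r["id"] > last_seen_id),
        key=lambda r: r["id"],
    )
    return rows[:page_size]

def fetch_all(page_size=10):
    """Walk the whole table one page at a time, advancing the id cursor."""
    results, last_seen_id = [], 0
    while True:
        page = fetch_page(last_seen_id, page_size)
        if not page:
            break
        results.extend(page)
        last_seen_id = page[-1]["id"]  # cursor moves past this page

    return results

all_rows = fetch_all()
print(len(all_rows))  # 25
```

Because each page is bounded by page_size, no single transfer can approach max_allowed_packet, and each query finishes quickly enough that the net timeouts never fire.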

What not to do

Set net_read_timeout to an extremely large value to "disable" it

Effectively removing the timeout lets runaway queries and stalled clients hold connections open indefinitely, exhausting the connection pool.

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
