Error
Kubernetes · ERROR · Notable · Pod State · HIGH confidence

Container exited with a non-zero exit code

Production Risk

Pod is not serving traffic; if part of a Deployment, other replicas absorb load but capacity is reduced.

What this means

The Error pod state means at least one container in the pod exited with a non-zero exit code and will not be restarted, typically because restartPolicy is Never (or, for a Job, its backoffLimit has been exhausted). This is a terminal error state: the specific exit code and the application logs reveal the true root cause. It is distinct from CrashLoopBackOff, where the kubelet keeps attempting restarts with increasing backoff.

Why it happens
  1. Application process exited with a non-zero status code
  2. restartPolicy is Never, or retry attempts (e.g. a Job's backoffLimit) are exhausted
  3. Fatal runtime exception, panic, or unhandled signal in the application
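The causes above can be demonstrated with a minimal pod. This is a sketch: the pod name error-demo and the simulated failure message are illustrative, and running it requires access to a cluster. Because the container exits 1 and restartPolicy is Never, the pod goes straight to the terminal Error state with no restarts.

```shell
# Create a pod whose only container exits non-zero; restartPolicy: Never
# prevents retries, so the pod lands in Error rather than CrashLoopBackOff.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: error-demo
spec:
  restartPolicy: Never
  containers:
  - name: fail
    image: busybox
    command: ["sh", "-c", "echo 'fatal: simulated failure' >&2; exit 1"]
EOF

# Once the container terminates, STATUS shows Error with RESTARTS 0
kubectl get pod error-demo
```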
How to reproduce

Pod shows Error status in kubectl get pods after container terminates abnormally.

trigger — this will error
kubectl get pods
# NAME    READY   STATUS   RESTARTS   AGE
# mypod   0/1     Error    0          1m

kubectl logs mypod --previous
kubectl describe pod mypod | grep "Exit Code"

expected output

NAME    READY   STATUS   RESTARTS   AGE
mypod   0/1     Error    0          1m

Fix 1

Read previous container logs

WHEN Always — logs reveal the specific error

Read previous container logs
kubectl logs mypod --previous --tail=200

Why this works

Previous logs capture stderr/stdout from the failed container run.

Fix 2

Check exit code and map to cause

WHEN After reading logs

Check exit code and map to cause
kubectl describe pod mypod | grep -A 5 "Last State:"

Why this works

The exit code narrows down whether the failure is application-level, OOM, signal-based, or runtime.
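As a quick reference, common exit codes can be interpreted with a small shell helper. This is a sketch, not an official Kubernetes mapping: the helper name explain_exit_code is hypothetical, and the signal entries follow the usual 128+signal convention.

```shell
#!/bin/sh
# Hypothetical helper: map a container exit code to a likely cause.
# Extract the code from a terminated container with, for example:
#   code=$(kubectl get pod mypod \
#     -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}')
explain_exit_code() {
  case "$1" in
    0)   echo "success (not an Error state)" ;;
    1)   echo "general application error - check application logs" ;;
    126) echo "command found but not executable" ;;
    127) echo "command not found - check image entrypoint/command" ;;
    134) echo "SIGABRT (128+6) - assertion failure or abort()" ;;
    137) echo "SIGKILL (128+9) - often OOMKilled; check 'Reason' in describe" ;;
    139) echo "SIGSEGV (128+11) - segmentation fault in the process" ;;
    143) echo "SIGTERM (128+15) - graceful termination requested" ;;
    *)   echo "application-specific exit code $1 - consult app docs" ;;
  esac
}

explain_exit_code 137
# prints: SIGKILL (128+9) - often OOMKilled; check 'Reason' in describe
```

The 128+N codes mean the process was killed by signal N, which distinguishes infrastructure-level terminations (OOM kill, eviction) from application-level failures.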

Sources
Kubernetes Documentation (official documentation)

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
