ExitCode 137
Kubernetes · CRITICAL · Common · Container Error · HIGH confidence

Container killed by SIGKILL (OOM or manual kill)

Production Risk

OOM kills cause immediate pod restarts and potential data loss; in-flight requests are dropped.

What this means

Exit code 137 equals 128 + 9 (SIGKILL). The container was forcibly killed, most commonly by the Linux kernel OOM killer after the container exceeded its memory limit, or by the kubelet when a pod is deleted and the container does not stop within its termination grace period. OOMKilled is the most operationally significant cause and appears as the Reason in the container's last terminated state. Repeated OOM kills cause data loss and service instability.
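The 128 + N convention can be checked with Python's standard-library signal module; a minimal sketch (the helper name is an illustration, not part of any API):

```python
import signal

# POSIX shells and container runtimes report "killed by signal N"
# as exit code 128 + N.
def signal_from_exit_code(code: int):
    """Return the Signals member encoded in an exit code, or None."""
    if code > 128:
        return signal.Signals(code - 128)
    return None

print(signal_from_exit_code(137).name)  # SIGKILL -> the 137 case
print(signal_from_exit_code(143).name)  # SIGTERM -> a graceful shutdown
```

The same decoding explains exit code 143 (SIGTERM), which you will see when a container shuts down cleanly during pod termination.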

Why it happens
  1. Container exceeded its memory limit and was OOM-killed by the kernel
  2. Pod was deleted (kubectl delete pod), which sends SIGTERM and then SIGKILL after the grace period
  3. Node ran out of memory and the kernel OOM killer chose a victim; the kubelet biases this so lower-QoS pods are killed first
  4. Memory leak in the application consuming unbounded heap
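Cause 3 interacts with pod QoS: the kubelet sets each container's oom_score_adj from the pod's QoS class, so under node memory pressure pods with no resource requests (BestEffort) are killed before pods whose requests equal their limits (Guaranteed). A hedged illustration of the two extremes (names and image are placeholders):

```yaml
# BestEffort: no requests/limits -> highest oom_score_adj, killed first
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod   # placeholder name
spec:
  containers:
  - name: app
    image: myapp:latest  # placeholder image
---
# Guaranteed: requests == limits (CPU and memory) -> killed last
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod   # placeholder name
spec:
  containers:
  - name: app
    image: myapp:latest  # placeholder image
    resources:
      requests: {memory: "512Mi", cpu: "250m"}
      limits:   {memory: "512Mi", cpu: "250m"}
```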
How to reproduce

Pod shows OOMKilled reason in describe output; container restarts with exit code 137.

trigger — this will error
kubectl describe pod mypod
# Last State: Terminated  Reason: OOMKilled  Exit Code: 137

kubectl top pod mypod --containers

expected output

Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  Started:      ...
  Finished:     ...
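To reproduce an OOM kill deliberately, run a workload that allocates more memory than its limit; a sketch using the community polinux/stress image (the image and its flags are assumptions, any memory-hungry command works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oom-demo
spec:
  restartPolicy: Never
  containers:
  - name: stress
    image: polinux/stress            # community stress image (assumption)
    resources:
      limits:
        memory: "100Mi"              # limit below what the workload allocates
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
```

Shortly after starting, kubectl describe pod oom-demo should show Reason: OOMKilled with Exit Code: 137.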

Fix 1

Increase memory limit in pod spec

WHEN Application legitimately needs more memory

# In deployment spec
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"

Why this works

Raising the limit gives the container more headroom before the OOM killer triggers.

Fix 2

Profile and fix memory leak

WHEN Memory usage grows unboundedly over time

# Monitor memory over time
kubectl top pod mypod --containers
# Use language-specific profiler (pprof, jmap, etc.) to identify leak

Why this works

Fixing the root-cause leak eliminates OOM kills permanently.

Fix 3

Set up VPA (Vertical Pod Autoscaler)

WHEN Memory requirements are unpredictable

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: mypod-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeployment
  updatePolicy:
    updateMode: "Auto"

Why this works

VPA automatically adjusts resource requests based on observed usage.
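If automatic pod evictions are a concern, VPA also supports a recommendation-only mode: set updateMode to "Off" and read the suggested requests from the VPA object's status before applying them by hand (same target names as the manifest above):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: mypod-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeployment
  updatePolicy:
    updateMode: "Off"   # recommend only; never evict pods
```

The recommendations then appear under status.recommendation and can be inspected with kubectl describe vpa mypod-vpa.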

Sources

Kubernetes Documentation (official)

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
