RunContainerError
Kubernetes · ERROR · Common · Pod Lifecycle · High confidence

Error running the container

Production Risk

The application container cannot start, leading to service unavailability.

What this means

The container runtime (e.g., containerd, CRI-O) failed to start the container after it was successfully configured. This error points to issues with the container image's entrypoint, command, or low-level runtime problems.

Why it happens
  1. The command specified in the container image's entrypoint is not in the PATH or does not exist
  2. Incorrect file system permissions on required binaries or directories inside the container
  3. A misconfiguration in the container runtime on the node
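The first cause above can be reproduced outside Kubernetes as a quick sanity check: a POSIX shell fails the same way the OCI runtime does when asked to exec a binary that does not exist (`my-command` is the same placeholder name used in the example output below).

```shell
# Attempting to exec a nonexistent binary fails with "not found";
# POSIX shells report this with exit code 127 — the same failure class
# the container runtime surfaces as RunContainerError.
sh -c 'my-command' 2>/dev/null
echo "exit code: $?"
# → exit code: 127
```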
How to reproduce

A pod fails to start, showing RunContainerError in its status.

trigger — this will error
kubectl describe pod my-app-pod

expected output

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  Failed            30s (x3 over 1m)   kubelet            Error: RunContainerError: failed to start container "my-app": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "my-command": executable file not found in $PATH: unknown

Fix 1

Check the image's Dockerfile

WHEN the error indicates an executable is not found

FROM alpine:3.15
# Incorrect: CMD references 'my-command', which does not exist in the image
CMD ["my-command"]
# Correct: point CMD at a binary that actually ships with the image, e.g.
# CMD ["/bin/sh"]

Why this works

Review the Dockerfile's ENTRYPOINT and CMD instructions to ensure they reference a valid executable binary present in the container's file system and PATH.
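If the Dockerfile is not at hand, the entrypoint and command baked into the image can be inspected directly. A sketch, assuming a local Docker installation; `my-app:latest` is a placeholder image name:

```shell
# Print the ENTRYPOINT and CMD recorded in the image's config
docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' my-app:latest
```

Whatever this prints must resolve to an executable on the container's PATH, or the runtime will fail with the error shown above.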

Fix 2

Override the command in the pod spec

WHEN you want to test a different command without rebuilding the image

kubectl patch deployment my-app --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["/bin/sh", "-c", "sleep 3600"]}]'

Why this works

This patches the deployment to override the container's default command. If the pod starts successfully with a simple command like sleep, it confirms the issue is with the original command or entrypoint in the image.
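The same override can be expressed declaratively in the deployment manifest instead of via `kubectl patch`. A sketch with placeholder names (`my-app`, `my-app:latest`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          # 'command' overrides the image's ENTRYPOINT;
          # 'args' would override the image's CMD
          command: ["/bin/sh", "-c", "sleep 3600"]
```

Once the pod runs with the harmless `sleep`, you can `kubectl exec` into it to verify which binaries actually exist inside the image.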

What not to do

Reboot the node

While a malfunctioning container runtime could be the cause, it is far more likely to be an issue with the container image or pod spec. Rebooting the node is a disruptive step that should only be taken after other causes are ruled out.

Sources
Official documentation ↗

k8s.io/kubernetes/pkg/kubelet/container/runtime.go

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
