Pod is being deleted and awaiting graceful shutdown
Production Risk
Stuck Terminating pods can block rolling updates and consume namespace resource quota.
A pod in the Terminating state has received a deletion signal and Kubernetes is waiting for it to shut down gracefully within the terminationGracePeriodSeconds window (default 30s). If the pod does not exit within this window, kubelet sends SIGKILL. Pods stuck in Terminating indefinitely usually have a finalizer that has not been removed or a preStop hook that is hanging.
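The effective window can be inspected on the live pod before troubleshooting further (a sketch; `mypod` is a placeholder name, and the commands assume a configured cluster context):

```shell
# Print the pod's graceful-shutdown window in seconds.
# The API server defaults this field to 30 if the manifest did not set it.
kubectl get pod mypod -o jsonpath='{.spec.terminationGracePeriodSeconds}'

# How long has the pod actually been in Terminating? Compare against the window.
kubectl get pod mypod -o jsonpath='{.metadata.deletionTimestamp}'
```

`deletionTimestamp` is set the moment deletion is requested, so a pod whose timestamp is far older than its grace period is genuinely stuck, not just slow.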
1. kubectl delete pod was issued and the pod is shutting down gracefully
2. A finalizer on the pod is preventing deletion from completing
3. A preStop lifecycle hook is hanging or taking too long
4. The node is not reachable, so kubelet cannot report pod termination
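A quick way to narrow down which of these causes applies (again treating `mypod` as a placeholder):

```shell
# Cause 2: any finalizers still attached? A non-empty list blocks deletion.
kubectl get pod mypod -o jsonpath='{.metadata.finalizers}'

# Cause 3: is a preStop hook defined? A hanging hook shows up here.
kubectl get pod mypod -o jsonpath='{.spec.containers[*].lifecycle.preStop}'

# Cause 4: is the node reachable? A NotReady node cannot confirm termination.
kubectl get node "$(kubectl get pod mypod -o jsonpath='{.spec.nodeName}')"
```

Cause 1 needs no intervention: the pod disappears on its own once the grace period elapses.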
Pod appears stuck in Terminating state for longer than terminationGracePeriodSeconds.
kubectl get pods
# NAME    READY   STATUS        RESTARTS   AGE
# mypod   1/1     Terminating   0          2d

kubectl describe pod mypod | grep -E "Finalizers|Conditions"
expected output
NAME    READY   STATUS        RESTARTS   AGE
mypod   1/1     Terminating   0          2d
Fix 1
Force delete a stuck Terminating pod
WHEN Pod is stuck in Terminating and the node is unreachable or the finalizer is confirmed safe to bypass
kubectl delete pod mypod --grace-period=0 --force
Why this works
Bypasses the graceful shutdown window and removes the pod object from the API server immediately.
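Note that on an unreachable node a force delete only removes the API object; the kubelet may still be running the container until the node recovers, which is why Kubernetes cautions against force-deleting StatefulSet pods. A minimal sketch with verification (`mypod` is a placeholder):

```shell
# Remove the pod object immediately, without waiting for kubelet confirmation.
kubectl delete pod mypod --grace-period=0 --force

# Confirm the object is gone from the API server (expect an Error: NotFound).
kubectl get pod mypod
```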
Fix 2
Remove blocking finalizer
WHEN kubectl describe shows a finalizer preventing deletion
kubectl patch pod mypod -p '{"metadata":{"finalizers":[]}}' --type=merge
Why this works
Clearing finalizers allows the API server to complete garbage collection of the pod object.
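Wiping the whole list also bypasses finalizers that other controllers may still depend on. A safer variant is to remove only the blocking entry with a JSON patch (a sketch; index 0 is an assumption, so substitute the index of the offending finalizer):

```shell
# List the finalizers first to find the blocking entry and its position.
kubectl get pod mypod -o jsonpath='{.metadata.finalizers}'

# Remove only the entry at index 0 instead of clearing the entire list.
kubectl patch pod mypod --type=json \
  -p '[{"op": "remove", "path": "/metadata/finalizers/0"}]'
```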
Kubernetes Documentation
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev