Node is not ready to accept pods
Production Risk
A NotReady node reduces cluster capacity. If multiple nodes become NotReady, it can lead to resource shortages and service outages. Pods on the NotReady node will eventually be evicted and rescheduled, which can cause disruption.
A node in the cluster is reporting a 'NotReady' status. This means the kubelet on that node is not healthy or is not communicating with the control plane, and therefore the node cannot have new pods scheduled on it.
1. The kubelet process on the worker node has crashed or is not running
2. Network connectivity issues are preventing the node from contacting the API server
3. The node's resources (CPU, memory, or disk) are exhausted
4. The kubelet's startup files are misconfigured
`kubectl get nodes` shows one or more nodes with a 'NotReady' status.
kubectl get nodes
Expected output
NAME     STATUS     ROLES    AGE   VERSION
node-1   Ready      <none>   2d    v1.23.5
node-2   NotReady   <none>   2d    v1.23.5
node-3   Ready      <none>   2d    v1.23.5
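If you want to act on only the affected nodes in a script, you can filter the STATUS column. A minimal sketch using sample output; with a live cluster you would pipe `kubectl get nodes` directly instead of using the hardcoded `nodes` variable:

```shell
# Sample output standing in for a live `kubectl get nodes` call.
nodes='NAME     STATUS     ROLES    AGE   VERSION
node-1   Ready      <none>   2d    v1.23.5
node-2   NotReady   <none>   2d    v1.23.5
node-3   Ready      <none>   2d    v1.23.5'

# Skip the header row and print the name of every node whose STATUS is NotReady.
echo "$nodes" | awk 'NR > 1 && $2 == "NotReady" { print $1 }'
```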
Fix 1
Describe the node to see conditions
WHEN To get more details from the control plane about the node's state
kubectl describe node node-2
Why this works
The 'Conditions' section of the output will show the status of checks like 'Ready', 'MemoryPressure', 'DiskPressure', and 'PIDPressure', often with messages explaining the failure.
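You can also check the Ready condition without reading the full describe output. A hedged sketch that scans a sample Conditions table; the condition types are real, but the Reason value shown here is illustrative, and on a live cluster you would feed in the actual `kubectl describe node` output:

```shell
# Sample 'Conditions' section as printed by `kubectl describe node`.
conditions='Type             Status  Reason
MemoryPressure   False   KubeletHasSufficientMemory
DiskPressure     False   KubeletHasNoDiskPressure
PIDPressure      False   KubeletHasSufficientPID
Ready            False   KubeletNotReady'

# A node is healthy only when the Ready condition is True; flag anything else.
echo "$conditions" | awk 'NR > 1 && $1 == "Ready" && $2 != "True" { print "node is NotReady:", $3 }'
```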
Fix 2
Check the kubelet service on the node
WHEN You have SSH access to the affected node
ssh node-2 "systemctl status kubelet"
Why this works
This shows whether the kubelet service is active and running. If it is not, inspect its logs for errors using `journalctl -u kubelet`.
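When the kubelet is failing, its journal is usually noisy, so it helps to surface only the error lines. A sketch over sample journal lines; the log messages are illustrative, and in practice you would pipe `journalctl -u kubelet --no-pager` into the grep instead:

```shell
# Sample journal lines standing in for `journalctl -u kubelet --no-pager`.
logs='Jun 01 10:00:01 node-2 kubelet[812]: I0601 Starting kubelet
Jun 01 10:00:02 node-2 kubelet[812]: E0601 failed to run Kubelet: unable to load client CA file
Jun 01 10:00:02 node-2 systemd[1]: kubelet.service: Main process exited, code=exited'

# Keep only lines that look like failures (case-insensitive match).
echo "$logs" | grep -iE 'error|failed|exited'
```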
Fix 3
Drain the node to reschedule pods
WHEN The node is unresponsive and you want to move its workloads elsewhere
kubectl drain node-2 --ignore-daemonsets
Why this works
This command safely evicts all pods from the node, allowing the scheduler to recreate them on healthy nodes. This is a good first step to restoring service while you debug the node itself.
✕ Forcefully delete the Node object from the API server
This does not fix the underlying problem on the node itself. If the kubelet comes back online, it will simply re-register the node, and deleting the object discards state that is useful for debugging.
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev