Namespace resource quota exceeded for a specific resource type
Production Risk
High — quota exhaustion blocks creation of new pods and other resources in the namespace, causing deployment failures and potential service degradation.
A request to create or update a resource in the namespace was denied because it would exceed the namespace's ResourceQuota for a specific resource (CPU requests/limits, memory requests/limits, pod count, PVC count, service count, etc.). Unlike a node-level resource shortage, this is a namespace-level administrative limit enforced by the API server.
1. Namespace CPU or memory request/limit quota is exhausted by existing workloads.
2. Pod count quota (count/pods) has been reached in the namespace.
3. PersistentVolumeClaim count or storage capacity quota is exhausted.
4. ConfigMap, Secret, or Service count quotas are reached.
5. A burst of deployments or CronJob runs simultaneously filled the quota.
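The administrative limit behind this error is defined in a ResourceQuota object. A minimal sketch, assuming a namespace named my-namespace (the quota name and values are illustrative and should match your environment):

```shell
# Hypothetical quota for the namespace "my-namespace"; adjust names
# and values before applying.
kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
    persistentvolumeclaims: "10"
EOF
```

Any create or update that would push the namespace's total past one of these hard values is rejected by the API server at admission time.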
kubectl apply or a controller create/update request is rejected by the API server with a quota exceeded message.
kubectl apply -f my-deployment.yaml
# Error from server (Forbidden): error when creating "my-deployment.yaml":
# pods "my-pod" is forbidden: exceeded quota: my-quota,
# requested: requests.cpu=500m, used: requests.cpu=1900m, limited: requests.cpu=2
expected output
Error from server (Forbidden): pods "my-pod" is forbidden: exceeded quota: team-quota, requested: requests.memory=512Mi, used: requests.memory=3840Mi, limited: requests.memory=4Gi
Fix 1
Check current quota usage in the namespace
WHEN First step to understand which resource is exhausted
kubectl describe resourcequota -n my-namespace
# Shows: used vs hard for each resource type
Why this works
ResourceQuota describe output shows the current consumption and the hard limits, making it easy to see which resource is over-allocated.
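The same used-versus-hard data is also available in the quota object's status fields, which is handy for scripting; the quota and namespace names below are placeholders:

```shell
# Full object, including status.hard and status.used
kubectl get resourcequota my-quota -n my-namespace -o yaml

# Or pull just the two maps with jsonpath
kubectl get resourcequota my-quota -n my-namespace \
  -o jsonpath='{.status.hard}{"\n"}{.status.used}{"\n"}'
```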
Fix 2
Reduce resource requests on existing deployments
WHEN Workloads have over-provisioned requests relative to actual usage
# Check actual usage with metrics-server
kubectl top pods -n my-namespace

# Then lower requests in the deployment
kubectl set resources deployment my-deploy -n my-namespace \
  --requests=cpu=100m,memory=128Mi
Why this works
Lowering requests frees quota headroom for new workloads without increasing the quota limit.
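To find the biggest consumers first, kubectl top can sort its output (requires metrics-server; the namespace name is illustrative):

```shell
# Highest memory consumers first
kubectl top pods -n my-namespace --sort-by=memory

# Highest CPU consumers first
kubectl top pods -n my-namespace --sort-by=cpu
```

Pods whose requests sit far above their measured usage are the best candidates for trimming.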
Fix 3
Increase the ResourceQuota limit
WHEN The namespace legitimately needs more resources
kubectl edit resourcequota my-quota -n my-namespace
# Increase the hard.requests.cpu or hard.requests.memory values
Why this works
Raising the quota hard limit allows more resources to be requested in the namespace; coordinate with cluster admin for capacity planning.
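For a non-interactive change (for example, from CI or an automation script), the same edit can be made with a merge patch; the quota name and new values are illustrative:

```shell
# Raise the hard limits declaratively
kubectl patch resourcequota my-quota -n my-namespace --type=merge \
  -p '{"spec":{"hard":{"requests.cpu":"4","requests.memory":"8Gi"}}}'

# Verify the new limits took effect
kubectl describe resourcequota my-quota -n my-namespace
```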
Fix 4
Scale down or delete unused workloads
WHEN Stale or idle deployments are consuming quota
kubectl get deployments -n my-namespace
kubectl scale deployment idle-deploy --replicas=0 -n my-namespace
Why this works
Scaling idle workloads to zero releases their quota reservations immediately.
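If simultaneous CronJob runs are what filled the quota, suspending the CronJob and cleaning up finished Jobs also frees headroom; the CronJob name below is illustrative:

```shell
# Stop new Jobs from being created by this CronJob
kubectl patch cronjob my-cron -n my-namespace -p '{"spec":{"suspend":true}}'

# Remove finished Jobs and their completed pods
kubectl delete jobs -n my-namespace --field-selector status.successful=1
```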
Kubernetes Documentation — Resource Quotas
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev