ResourceQuotaExceededNamespace
Kubernetes · ERROR · Common · Resource Management · HIGH confidence

Namespace resource quota exceeded for a specific resource type

Production Risk

High — quota exhaustion causes the API server to reject all new pod and resource creation in the namespace, blocking deployments and rollouts and potentially degrading services that can no longer scale.

What this means

A request to create or update a resource in the namespace was denied because it would exceed the namespace's ResourceQuota for a specific resource (CPU requests/limits, memory requests/limits, pod count, PVC count, service count, etc.). Unlike a node-level resource shortage, this is a namespace-level administrative limit enforced by the API server.
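For reference, a ResourceQuota that enforces these namespace-level limits looks like the following. This is a minimal sketch; the names and values are placeholders, not taken from this page:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota          # placeholder name
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "2"       # total CPU requested across all pods
    requests.memory: 4Gi    # total memory requested across all pods
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"              # object-count quota on pods
    persistentvolumeclaims: "5"
    services: "10"
```

Any create or update that would push the namespace's aggregate usage past one of these `hard` values is rejected at admission time.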

Why it happens
  1. Namespace CPU or memory request/limit quota is exhausted by existing workloads.
  2. Pod count quota (count/pods) has been reached in the namespace.
  3. PersistentVolumeClaim count or storage capacity quota is exhausted.
  4. ConfigMap, Secret, or Service count quotas have been reached.
  5. A simultaneous burst of deployments or CronJob runs filled the quota.
How to reproduce

kubectl apply or a controller create/update request is rejected by the API server with a quota exceeded message.

trigger — this will error
kubectl apply -f my-deployment.yaml
# Error from server (Forbidden): error when creating "my-deployment.yaml":
# pods "my-pod" is forbidden: exceeded quota: my-quota,
# requested: requests.cpu=500m, used: requests.cpu=1900m, limited: requests.cpu=2

expected output

Error from server (Forbidden): pods "my-pod" is forbidden: exceeded quota: team-quota,
requested: requests.memory=512Mi, used: requests.memory=3584Mi, limited: requests.memory=4Gi

Fix 1

Check current quota usage in the namespace

WHEN First step to understand which resource is exhausted

Check current quota usage in the namespace
kubectl describe resourcequota -n my-namespace
# Shows: used vs hard for each resource type

Why this works

ResourceQuota describe output shows the current consumption and the hard limits, making it easy to see which resource is over-allocated.
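The same used-vs-hard data is available in the object's status, which is easier to consume programmatically. An illustrative status block (placeholder values) from `kubectl get resourcequota team-quota -n my-namespace -o yaml`:

```yaml
status:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    pods: "20"
  used:
    requests.cpu: 1900m
    requests.memory: 3584Mi
    pods: "18"
```

Here `requests.cpu` has only 100m of headroom, which is why the 500m request in the trigger example above is rejected.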

Fix 2

Reduce resource requests on existing deployments

WHEN Workloads have over-provisioned requests relative to actual usage

Reduce resource requests on existing deployments
# Check actual usage with metrics-server
kubectl top pods -n my-namespace

# Then lower requests in the deployment
kubectl set resources deployment my-deploy -n my-namespace \
  --requests=cpu=100m,memory=128Mi

Why this works

Lowering requests frees quota headroom for new workloads without increasing the quota limit.
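The same change can be made declaratively in the Deployment's pod template, which survives re-application of the manifest. A sketch with placeholder names and values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: app               # placeholder container name
          image: my-image:latest  # placeholder image
          resources:
            requests:
              cpu: 100m           # lowered request frees quota headroom
              memory: 128Mi
```

Size the new requests against the `kubectl top pods` output rather than guessing; requests far below actual usage risk node overcommit.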

Fix 3

Increase the ResourceQuota limit

WHEN The namespace legitimately needs more resources

Increase the ResourceQuota limit
kubectl edit resourcequota my-quota -n my-namespace
# Increase the hard.requests.cpu or hard.requests.memory values

Why this works

Raising the quota hard limit allows more resources to be requested in the namespace; coordinate with cluster admin for capacity planning.
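Rather than editing in place, the quota can be raised declaratively and re-applied, keeping the change in version control. A sketch with placeholder values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "4"      # e.g. raised from "2"
    requests.memory: 8Gi   # e.g. raised from 4Gi
```

Apply it with `kubectl apply -f quota.yaml`; pending requests succeed as soon as the new limits are in effect.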

Fix 4

Scale down or delete unused workloads

WHEN Stale or idle deployments are consuming quota

Scale down or delete unused workloads
kubectl get deployments -n my-namespace
kubectl scale deployment idle-deploy --replicas=0 -n my-namespace

Why this works

Scaling idle workloads to zero releases their quota reservations immediately.

What not to do

Don't delete the ResourceQuota object just to unblock a deployment; it exists to protect shared cluster capacity, and removing it only moves the failure to node-level resource pressure. Likewise, avoid setting requests to zero across workloads to slip under the quota, since the scheduler then has no signal for placement and nodes can become overcommitted.
Sources
Official documentation

Kubernetes Documentation — Resource Quotas

Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev
