What Does the Pending State Mean for a Pod in Kubernetes?
The Pending state is the initial phase of a Pod's lifecycle in Kubernetes. When you create a Pod (for example, via kubectl apply -f pod.yaml), the API server accepts the object, but the scheduler has not yet bound it to a suitable node to run the containers. The Pod remains in this state until all conditions for placement are met.
This typically looks like:
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-5d89d7c8b9-abcde   0/1     Pending   0          2m
If a Pod remains in Pending for an extended period (several minutes or more), it indicates a problem preventing the scheduler from selecting a node. Let's review the most common causes and solutions.
Common Causes
Here are the most frequent reasons a Pod does not leave the Pending state:
- Insufficient resources on nodes
The Pod requests more CPU or memory via resources.requests than is available on any node in the cluster. For example, if all nodes have 2 GB of free memory and the Pod requests 4 GB, it will not be scheduled.
- Mismatched taints and tolerations
Nodes may have taints (e.g., dedicated=production:NoSchedule) that repel Pods without corresponding tolerations in the manifest. If a Pod lacks tolerations for such taints, the scheduler will not place it.
- PersistentVolumeClaim (PVC) issues
If a Pod requires a volume via volumeMounts and persistentVolumeClaim.claimName, and the corresponding PVC is itself in a Pending state (e.g., no available PV, or the storage class is unavailable), the Pod will also remain Pending.
- Node selector, affinity, or anti-affinity constraints
Conditions in spec.nodeSelector or spec.affinity may be too restrictive. For example, if you specified nodeSelector: {disktype: ssd} but no node has that label, the Pod will not be scheduled.
- ResourceQuota exhaustion
A ResourceQuota in the namespace may limit the total number of Pods, CPU, or memory. If the quota is exceeded, new Pods will remain Pending.
- Pod density limit on nodes
The maxPods parameter in the kubelet configuration (default is usually 110) may be reached on nodes, preventing the scheduler from adding more Pods.
- No suitable nodes available
All nodes may be cordoned for maintenance (e.g., via kubectl cordon) or be in a NotReady state.
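A quick way to rule out several of these causes at once is to inspect node state, taints, and Pod capacity directly. A minimal first-pass check, assuming a standard kubectl setup:

# List nodes and their status: look for NotReady or SchedulingDisabled (cordoned)
kubectl get nodes

# Show taints on all nodes, to compare against the Pod's tolerations
kubectl describe nodes | grep -A 3 Taints

# Show each node's Pod capacity (relevant to the maxPods limit)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.pods}{"\n"}{end}'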
Troubleshooting Steps
Step 1: Analyze Pod Events with kubectl describe
This is the first and most critical step. The kubectl describe pod command shows Events where the scheduler explains why it cannot place the Pod.
- Find the Pod name and namespace:
kubectl get pods --all-namespaces | grep Pending
- Describe the Pod, replacing <pod-name> and <namespace>:
kubectl describe pod <pod-name> -n <namespace>
- In the output, locate the Events section. Example message:
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  4m    default-scheduler  0/3 nodes are available: 3 Insufficient cpu.
Here, Insufficient cpu indicates a CPU shortage.
Other possible messages:
- 0/3 nodes are available: 3 node(s) had taint {key: value}, that the pod didn't tolerate.
- 0/3 nodes are available: 2 node(s) had volume node affinity conflict.
- persistentvolumeclaim "my-pvc" not found
💡 Tip: If there are no events, the Pod may not yet have been processed by the scheduler. Wait 1-2 minutes and try again.
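When you have many Pods to triage, a field selector avoids piping through grep. A small sketch using standard kubectl flags (replace the placeholders with your own values):

# List only Pods whose phase is Pending, across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Show only the events attached to a specific Pod
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>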
Step 2: Check Node Resource Availability
After identifying the cause from events (e.g., Insufficient memory), verify actual available resources.
- View overall node resource usage (requires the metrics-server add-on):
kubectl top nodes
Output:
NAME     CPU(cores)   MEMORY(bytes)
node-1   150m         1900Mi
node-2   200m         2100Mi
Note the MEMORY(bytes) column: if it is close to the node's capacity, this could be the cause. Keep in mind that the scheduler decides based on requests (the Allocated resources section below), not live usage, so kubectl top only gives a first approximation.
- Describe a specific node (e.g., node-1) in detail:
kubectl describe node node-1
In the Allocated resources section, you'll see how many resources are already allocated to Pods. Compare with Capacity.
- If resources are exhausted, you have options (see the sketch after this list):
- Increase cluster resources: add a node, either manually through your cloud provider or automatically via Cluster Autoscaler if it is configured. (Note that kubectl scale changes the number of Pod replicas, not the number of nodes, so it won't free up capacity by itself.)
- Reduce Pod resource requests: edit the Pod manifest and lower the resources.requests.cpu and resources.requests.memory values, for example from 2Gi to 1Gi.
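If the Pod is managed by a Deployment, the requests can also be lowered in place rather than by editing a file. A minimal sketch, assuming a hypothetical Deployment named myapp:

# Lower the requests on every container in the Deployment;
# this triggers a rolling restart of its Pods
kubectl set resources deployment/myapp --requests=cpu=250m,memory=256Mi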
Step 3: Review and Adjust the Pod Manifest
Often, the issue lies in the Pod or Deployment manifest. Check these sections:
- Resource requests (resources.requests)
Ensure requests do not exceed typical node resources. For example, if your nodes have 2 GB RAM, don't request 1.5 GB per container if you plan to run multiple containers.
spec:
  containers:
  - name: myapp
    image: myapp:latest
    resources:
      requests:
        memory: "256Mi"  # ↓ reduce if needed
        cpu: "250m"
- Node selector and affinity
Check for overly specific selectors:
spec:
  nodeSelector:
    disktype: ssd  # ensure nodes have this label: kubectl get nodes --show-labels
If the label is missing, the Pod won't schedule. Either remove nodeSelector or add the label to a node: kubectl label node <node-name> disktype=ssd.
- Tolerations
If nodes have taints (e.g., for dedicated workloads), the Pod must have matching tolerations:
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
- Affinity/anti-affinity
Complex affinity rules may fail to find suitable nodes. Temporarily simplify or remove the affinity section to test; a relaxed variant is sketched below.
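One way to test affinity without deleting it entirely is to downgrade a hard requirement to a soft preference, so the scheduler treats the rule as a hint rather than a filter. A minimal sketch, reusing the illustrative disktype=ssd label from above:

spec:
  affinity:
    nodeAffinity:
      # preferred... instead of requiredDuringSchedulingIgnoredDuringExecution:
      # the Pod can still land on nodes without the label
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]

If the Pod schedules with the relaxed rule, the original requirement was the blocker.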
After making changes, apply the manifest:
kubectl apply -f pod.yaml
Step 4: Check PersistentVolumeClaim (PVC) and ResourceQuota
If the Pod uses storage, ensure the PVC is ready.
- Check PVC status:
kubectl get pvc -n <namespace>
If the PVC is Pending, there's a volume issue. Describe it:
kubectl describe pvc <pvc-name> -n <namespace>
Possible causes:
- No available PersistentVolume (PV) with a matching storage class (storageClassName).
- A PV exists but is already bound to another PVC.
- The requested storage class does not exist in the cluster.
Solution: create a suitable PV, change the storageClassName in the PVC, or use dynamic provisioning (a minimal example follows below).
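For reference, a minimal PVC that binds via dynamic provisioning might look like this; the claim name and storage class here are illustrative, so check kubectl get storageclass for the classes actually available in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard   # must match an existing StorageClass
  resources:
    requests:
      storage: 1Gi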
- Check ResourceQuota in the namespace:
kubectl get resourcequota -n <namespace>
kubectl describe resourcequota <quota-name> -n <namespace>
In the output, compare the hard and used columns. If used is close to hard, new Pods won't be created. Increase the quota or remove unused resources.
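For context, a namespace quota that triggers this behavior might look roughly like the following (the name and limits are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    pods: "20"            # max number of Pods in the namespace
    requests.cpu: "4"     # total CPU all Pods may request
    requests.memory: 8Gi  # total memory all Pods may request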
Prevention
To avoid Pods getting stuck in Pending in the future:
- Set realistic resource requests: Test your application and base requests on actual usage rather than padding them with a large safety margin.
- Use cluster autoscaling: If you run in the cloud, configure Cluster Autoscaler to add nodes when resources run low.
- Monitor resource usage regularly: Tools like kubectl top, Prometheus, and Grafana help visualize trends and predict shortages.
- Configure taints/tolerations and affinity correctly: Document which nodes are intended for which workloads and verify that manifests match that plan.
- Manage resource quotas: Set ResourceQuota in namespaces with some buffer, but not excessively high, so teams don't race each other for resources.
- Verify storage status: Ensure storage classes (StorageClass) are available and PVs are created correctly (a quick check follows below).
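A quick storage health check, assuming default kubectl access:

# List available StorageClasses and note which one is marked (default)
kubectl get storageclass

# Verify PV status: Available and Bound are healthy; Released or Failed need attention
kubectl get pv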
Following these guidelines will significantly reduce the likelihood of Pods hanging in the Pending state and ensure stable application operation in Kubernetes.