
K8s Pod Pending: Causes and Quick Fixes

This article explains why a Pod remains in Pending state instead of transitioning to Running and offers proven troubleshooting methods.

Updated: February 16, 2026
Reading time: 10-20 min
Difficulty: Medium
Author: FixPedia Team
Applies to: Kubernetes 1.20+, Minikube 1.25+, k3s 1.20+, OpenShift 4.6+

What Does the Pending State Mean for a Pod in Kubernetes?

The Pending state is the initial phase of a Pod's lifecycle in Kubernetes. When you create a Pod (for example, via kubectl apply -f pod.yaml), the system accepts it, but the scheduler has not yet found a suitable node to run the containers. The Pod remains in this state until all conditions for placement are met.

This typically looks like:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myapp-5d89d7c8b9-abcde   0/1     Pending   0          2m

If a Pod remains in Pending for an extended period (several minutes or more), it indicates a problem preventing the scheduler from selecting a node. Let's review the most common causes and solutions.

Common Causes

Here are the most frequent reasons a Pod does not leave the Pending state (a quick diagnostic command for each follows the list):

  1. Insufficient resources on nodes
    The Pod requests more CPU or memory via resources.requests than is available on any node in the cluster. For example, if all nodes have 2 GB of free memory and the Pod requests 4 GB, it will not be scheduled.
  2. Mismatched taints and tolerations
    Nodes may have taints (e.g., dedicated=production:NoSchedule) that repel Pods without corresponding tolerations in the manifest. If a Pod lacks tolerations for such taints, the scheduler will not place it.
  3. PersistentVolumeClaim (PVC) issues
    If a Pod requires a volume via volumeMounts and persistentVolumeClaim.claimName, and the corresponding PVC is in a Pending state (e.g., no available PV or storage class is unavailable), the Pod will also remain Pending.
  4. Node selector, affinity, or anti-affinity constraints
    Conditions in spec.nodeSelector or spec.affinity may be too restrictive. For example, if you specified nodeSelector: {disktype: ssd} but no node has that label, the Pod will not be scheduled.
  5. ResourceQuota exhaustion
    A ResourceQuota in the namespace may limit the total number of Pods, CPU, or memory. If the quota is exceeded, new Pods will remain Pending.
  6. Pod density limit on nodes
    The maxPods parameter in the kubelet configuration (default is usually 110) may be reached on nodes, preventing the scheduler from adding more Pods.
  7. No suitable nodes available
    All nodes may be cordoned for maintenance (e.g., via kubectl cordon) or be in a NotReady state.
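The commands below give a quick read on each of these causes, in the same order; <namespace> and <node-name> are placeholders to replace with your own values.

# 1. Compare Pod requests with what is already allocated on each node
kubectl describe nodes | grep -A 8 "Allocated resources"

# 2. List taints on all nodes
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# 3. Check PVC status in the Pod's namespace
kubectl get pvc -n <namespace>

# 4. Show node labels to verify nodeSelector/affinity targets
kubectl get nodes --show-labels

# 5. Check quota usage in the namespace
kubectl describe resourcequota -n <namespace>

# 6. Show how many Pods a node can accept (the maxPods limit)
kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'

# 7. Spot cordoned (SchedulingDisabled) or NotReady nodes
kubectl get nodes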

Troubleshooting Steps

Step 1: Analyze Pod Events with kubectl describe

This is the first and most critical step. The kubectl describe pod command shows Events where the scheduler explains why it cannot place the Pod.

  1. Find the Pod name and namespace:
    kubectl get pods --all-namespaces | grep Pending
    
  2. Describe the Pod, replacing <pod-name> and <namespace>:
    kubectl describe pod <pod-name> -n <namespace>
    
  3. In the output, locate the Events section. Example message:
    Events:
      Type    Reason            Age   From               Message
      ----    ------            ----  ----               -------
      Normal  Scheduled         5m    default-scheduler  Successfully assigned default/myapp-5d89d7c8b9-abcde to node-1
      Warning FailedScheduling  4m    default-scheduler  0/3 nodes are available: 3 Insufficient cpu.
    

    Here, Insufficient cpu indicates a CPU shortage.
    Other possible messages:
    • 0/3 nodes are available: 3 node(s) had taint {key:value}, that the pod didn't tolerate.
    • 0/3 nodes are available: 2 node(s) had volume node affinity conflict.
    • persistentvolumeclaim "my-pvc" not found

    💡 Tip: If there are no events, the Pod may not yet have been processed by the scheduler. Wait 1-2 minutes and try again.
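When many Pods are affected, querying events directly can be faster than describing each Pod one by one. A sketch; keep in mind that events are retained only for a limited time (one hour by default), so older records may already be gone:

# All scheduling failures in a namespace, newest last
kubectl get events -n <namespace> --field-selector reason=FailedScheduling --sort-by='.lastTimestamp'

# Events for a single Pod
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>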

Step 2: Check Node Resource Availability

After identifying the cause from events (e.g., Insufficient memory), verify actual available resources.

  1. View overall node resource usage:
    kubectl top nodes
    

    Output:
    NAME     CPU(cores)   MEMORY(bytes)
    node-1   150m         1900Mi
    node-2   200m         2100Mi
    

    Note MEMORY(bytes)—if it's close to the node's capacity, this could be the cause. Keep in mind that kubectl top shows live usage, while the scheduler decides based on the sum of Pod requests, so the next check is more authoritative.
  2. Describe a specific node (e.g., node-1) in detail:
    kubectl describe node node-1
    

    In the Allocated resources section, you'll see how much CPU and memory Pods have already requested. Compare with Allocatable—this, not Capacity, is the value the scheduler uses. A one-liner for comparing allocatable resources across nodes follows this list.
  3. If resources are exhausted, you have options:
    • Increase cluster resources: add a new node, either manually through your cloud provider or automatically via Cluster Autoscaler if it is configured. (Note that kubectl scale changes the number of Pod replicas, not the number of nodes.)
    • Reduce Pod resource requests: edit the Pod manifest and lower resources.requests.cpu and resources.requests.memory values. For example, from 2Gi to 1Gi.
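To compare allocatable resources across all nodes at a glance, a custom-columns one-liner helps (the column names here are arbitrary):

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory,PODS:.status.allocatable.pods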

Step 3: Review and Adjust the Pod Manifest

Often, the issue lies in the Pod or Deployment manifest. Check these sections:

  1. Resource requests (resources.requests)
    Ensure requests do not exceed typical node resources. For example, if your nodes have 2 GB RAM, don't request 1.5 GB per container if you plan to run multiple containers.
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        resources:
          requests:
            memory: "256Mi"   # ↓ reduce if needed
            cpu: "250m"
    
  2. Node selector and affinity
    Check for overly specific selectors:
    spec:
      nodeSelector:
        disktype: ssd   # Ensure nodes have this label: kubectl get nodes --show-labels
    

    If the label is missing, the Pod won't schedule. Either remove nodeSelector or add the label to a node: kubectl label node <node-name> disktype=ssd.
  3. Tolerations
    If nodes have taints (e.g., for dedicated workloads), the Pod must have matching tolerations:
    spec:
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
    
  4. Affinity/anti-affinity
    Complex affinity rules may fail to match any node. Temporarily simplify or remove the affinity section to test, or switch to a soft (preferred) rule as in the sketch below.
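A soft (preferred) node affinity rule makes the scheduler favor matching nodes without ever blocking placement, which is useful while debugging. A minimal sketch, reusing the hypothetical disktype label from above:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd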

After making changes, apply the manifest:

kubectl apply -f pod.yaml
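Then watch the Pod until it leaves Pending (press Ctrl+C to stop watching):

kubectl get pod <pod-name> -n <namespace> -w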

Step 4: Check PersistentVolumeClaim (PVC) and ResourceQuota

If the Pod uses storage, ensure the PVC is ready.

  1. Check PVC status:
    kubectl get pvc -n <namespace>
    

    If the PVC is Pending, there's a volume issue. Describe it:
    kubectl describe pvc <pvc-name> -n <namespace>
    

    Possible causes:
    • No available PersistentVolume (PV) with a matching storage class (storageClassName).
    • A PV exists but is already bound to another PVC.
    • The requested StorageClass does not exist or has no working provisioner in the cluster.

    Solution: create a suitable PV (see the sketch after this list), change the storageClassName in the PVC, or use dynamic provisioning.
  2. Check ResourceQuota in the namespace:
    kubectl get resourcequota -n <namespace>
    kubectl describe resourcequota <quota-name> -n <namespace>
    

    In the output, look at hard and used limits. If used is close to hard, new Pods won't be created. Increase the quota or remove unused resources.
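Running kubectl get storageclass first shows which storage classes (and dynamic provisioners) the cluster already offers. If none match and you need to unblock a PVC manually, a statically provisioned PV can work. A minimal sketch for a single-node test cluster; the name, size, and path are illustrative, and hostPath is unsuitable for production:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv                 # hypothetical name
spec:
  capacity:
    storage: 1Gi              # must be >= the PVC's requested size
  accessModes:
  - ReadWriteOnce             # must match the PVC's access mode
  storageClassName: manual    # the PVC must request the same class
  hostPath:
    path: /mnt/data           # for test clusters only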

Prevention

To avoid Pods getting stuck in Pending in the future:

  • Set realistic resource requests: Profile your application and base requests on measured usage rather than padding them with an arbitrary safety margin.
  • Use cluster autoscaling: If in the cloud, configure Cluster Autoscaler to add nodes when resources are low.
  • Monitor resource usage regularly: Tools like kubectl top, Prometheus, and Grafana help visualize trends and predict shortages.
  • Configure taints/tolerations and affinity correctly: Document which nodes are intended for which workloads and verify alignment.
  • Manage resource quotas: Set ResourceQuota in namespaces with some headroom, but not so high that one workload can starve the rest of the cluster (see the example after this list).
  • Verify storage status: Ensure storage classes (StorageClass) are available and PVs are created correctly.
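As an illustration of the quota advice above, a minimal ResourceQuota sketch; the name, namespace, and all limits are placeholders to adapt to your cluster's capacity:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # hypothetical name
  namespace: my-namespace     # hypothetical namespace
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi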

Following these guidelines will significantly reduce the likelihood of Pods hanging in the Pending state and ensure stable application operation in Kubernetes.

