
CrashLoopBackOff in Kubernetes: Causes and Fixes

This article explains what the CrashLoopBackOff state means in Kubernetes and provides step-by-step instructions for diagnosing and fixing pod restart issues.

Updated on February 17, 2026
10-15 min
Medium
FixPedia Team
Applies to: Kubernetes 1.20+, Docker, containerd, kubelet

What the CrashLoopBackOff Error Means

CrashLoopBackOff is a Kubernetes pod state in which a container repeatedly fails (exits with a non-zero code) and the kubelet keeps trying, without success, to restart it. After each failed attempt, the delay before the next restart grows exponentially (10s, 20s, 40s, and so on, capped at five minutes), which is what the "BackOff" in the name refers to.
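The back-off schedule can be illustrated with a short shell loop; the 10-second base and five-minute cap match the kubelet's defaults:

```shell
# Illustrative only: after each crash the kubelet doubles the delay before the
# next restart, capped at five minutes (and resets it after a period of stability)
delay=10
for attempt in 1 2 3 4 5 6; do
  echo "restart $attempt: wait ${delay}s"
  delay=$((delay * 2))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```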

The error appears in the output of kubectl get pods:

NAME                     READY   STATUS             RESTARTS   AGE
my-app-5d89d7c8b9-xyz   0/1     CrashLoopBackOff   5          2m

This is not a Kubernetes error in itself, but an indicator that the container cannot start reliably. The problem almost always lies in the pod configuration or the application itself.
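Kubernetes treats any non-zero exit code from the container's main process as a failure. You can see the same mechanics locally with a plain shell, no cluster needed:

```shell
# The container's main process exits non-zero -- from the kubelet's point of
# view this is a failed container, and the restart/back-off cycle begins
sh -c 'echo "starting up"; exit 3'
echo "main process exit code: $?"   # prints: main process exit code: 3
```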

Common Causes

  1. Application error — the application crashes on startup (e.g., due to incorrect arguments, missing environment variables, code errors).
  2. Insufficient resources — the container lacks memory (OOMKilled) or CPU, causing a forced termination.
  3. Incorrect startup command — the pod manifest specifies invalid command or args, or the image lacks an ENTRYPOINT.
  4. Image issues — non-existent image tag, authentication error in a private registry, or a corrupted image.
  5. Unavailable volumes — a volume (PersistentVolumeClaim) is not mounted, unavailable, or has incorrect permissions.
  6. Insufficient privileges — the container lacks permissions (SecurityContext, ServiceAccount) to access resources (e.g., a port <1024).
  7. Port conflict — the application tries to listen on a port already occupied by another process in the container or on the host.
  8. Initialization problems — an init container failed, so the resources the main container depends on were never prepared.

Method 1: Diagnosis via Logs and Pod Description

The first and most crucial step is to gather information about the error.

  1. Find the pod name:
    kubectl get pods
    

    Locate the pod with the CrashLoopBackOff status. Note its full name (e.g., my-app-5d89d7c8b9-xyz).
  2. View logs from the previous container instance:
    kubectl logs <pod-name> --previous
    

    The --previous flag shows logs from the container that already terminated. If the pod is multi-container, specify the container name: -c <container-name>.
  3. Examine the pod's Events:
    kubectl describe pod <pod-name>
    

    In the output, find the Events section. Common errors:
    • OOMKilled — memory shortage.
    • Failed or ErrImagePull — image problems.
    • FailedMount — volume mounting error.
    • Exited with a non-zero code — application error.
  4. If the application logs to stdout/stderr, they will appear in kubectl logs. If logs are written to a file inside the container, try connecting to the crashed container (if possible):
    kubectl exec -it <pod-name> -- /bin/sh
    

    However, with CrashLoopBackOff, the container may be unavailable for exec. In this case, use --previous for logs or temporarily modify the manifest to prevent the container from terminating (e.g., run sleep infinity).
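A minimal sketch of such a temporary override (the container name and the presence of sleep in the image are assumptions). With the entrypoint replaced, the container stays up and you can kubectl exec into it to inspect files and logs:

```yaml
# Temporary debug-only override -- remove it once diagnosis is done
spec:
  containers:
  - name: my-app
    command: ["sleep", "infinity"]   # keeps the container alive instead of crashing
```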

Method 2: Checking and Increasing Resources (Memory/CPU)

A frequent cause is the container running out of memory, causing the kernel to kill the process (OOMKilled).

  1. In kubectl describe pod <pod-name>, look for:
    Limits:
      memory:  256Mi
    Requests:
      memory:  128Mi
    

    If the memory limit is too low (e.g., 128Mi) for a Java application or database, increase it.
  2. Modify the pod manifest (deployment/statefulset):
    spec:
      containers:
      - name: my-app
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
          requests:
            memory: "256Mi"
            cpu: "250m"
    

    Apply the changes:
    kubectl apply -f deployment.yaml
    
  3. If the issue is CPU, increase limits.cpu and requests.cpu. Note that CPU limits can cause throttling but not OOMKilled.
  4. For diagnostics, you can temporarily remove limits (not for production!) to check if the error disappears:
    resources:
      limits: {}
      requests: {}
    

Method 3: Checking the Startup Command and Arguments

If the application requires specific arguments or environment variables that aren't provided, it may crash.

  1. Check which command is executed in the container:
    kubectl describe pod <pod-name> | grep -A5 "Command:"
    kubectl describe pod <pod-name> | grep -A5 "Args:"
    

    If the command or arguments are incorrect, the container will terminate immediately.
  2. Example problem:
    The Dockerfile specifies ENTRYPOINT ["java", "-jar", "app.jar"], but the pod manifest overrides command: ["python"] — this will cause an error.
  3. Fix the manifest:
    spec:
      containers:
      - name: my-app
        image: my-image:latest
        command: ["java"]          # optional, if you want to override ENTRYPOINT
        args: ["-jar", "app.jar"] # optional, if you want to override CMD
        env:
        - name: DB_HOST
          value: "postgres"
    

    Ensure the command exists in the image (check the Dockerfile).
  4. If the application depends on environment variables, add all required ones in the env section. Check for typos.
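A missing required variable is easy to reproduce locally. The sketch below simulates an app that refuses to start without DB_HOST (the variable name is just this article's example):

```shell
# Simulated app start: refuses to run when the required variable is missing
env -u DB_HOST sh -c ': "${DB_HOST:?DB_HOST is required}"; echo "connecting to $DB_HOST"' \
  || echo "startup failed with a non-zero exit code"

# With the variable provided, the same "app" starts cleanly
DB_HOST=postgres sh -c ': "${DB_HOST:?DB_HOST is required}"; echo "connecting to $DB_HOST"'
```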

Method 4: Checking the Image and Its Availability

Image issues are one of the most common causes of CrashLoopBackOff.

  1. Verify the image exists and is accessible:
    kubectl describe pod <pod-name> | grep -i "image"
    

    Look for events like ErrImagePull or ImagePullBackOff.
  2. Ensure the image tag is correct:
    • Avoid using latest in production (it can change).
    • Check that the image exists in the registry: docker pull <image:tag> (if you have access).
  3. If the image is in a private registry, ensure you have created an imagePullSecret:
    kubectl get secret
    

    If the secret doesn't exist, create it:
    kubectl create secret docker-registry regcred \
      --docker-server=<registry-server> \
      --docker-username=<username> \
      --docker-password=<password>
    

    And add it to the pod/deployment:
    spec:
      imagePullSecrets:
      - name: regcred
    
  4. Check that the image is compatible with the node's architecture. For example, an amd64 image won't run on an arm64 node.
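If the cluster mixes architectures, you can pin a single-arch image to matching nodes via the standard kubernetes.io/arch node label. A sketch, assuming an amd64-only image:

```yaml
spec:
  nodeSelector:
    kubernetes.io/arch: amd64   # schedule only onto amd64 nodes
```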

Method 5: Checking Volumes and SecurityContext

If the container fails to mount a volume or access a resource due to permissions, it will crash.

  1. Check for FailedMount events:
    kubectl describe pod <pod-name> | grep -A10 "Events"
    

    Errors like MountVolume.SetUp or failed to mount volumes indicate issues with the PVC or volume configuration.
  2. Ensure the PVC exists and is in Bound status:
    kubectl get pvc
    
  3. Review SecurityContext:
    • If the container runs as root (UID 0) but the application requires a non-privileged user, configure runAsUser.
    • If you need to listen on a port <1024, CAP_NET_BIND_SERVICE is required:
      securityContext:
        capabilities:
          add: ["NET_BIND_SERVICE"]
      
    • Verify fsGroup is set correctly for volume access.
  4. Example fix for a volume:
    spec:
      containers:
      - name: my-app
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
    

    Ensure my-pvc exists and has sufficient capacity.
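For permission errors on mounted volumes, a pod-level securityContext is often the fix. A sketch, where UID/GID 1000 is an assumption that must match the user your image actually runs as:

```yaml
spec:
  securityContext:
    runAsUser: 1000    # run the main process as this non-root user
    runAsGroup: 1000
    fsGroup: 1000      # mounted volumes become group-accessible to GID 1000
```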

Prevention

  1. Test the image locally before deployment:
    docker run --rm -it <image:tag> <command>
    

    Ensure the container starts and doesn't crash.
  2. Specify explicit commands and arguments in the manifest, even if they match the Dockerfile. This eliminates implicit dependencies.
  3. Configure reasonable resource limits based on monitoring (e.g., via kubectl top pod). Start with a 20-30% buffer.
  4. Use readiness/liveness probes — they won't prevent CrashLoopBackOff, but help distinguish startup issues from runtime health problems. Note that an overly aggressive liveness probe (e.g., a too-short initial delay) can itself cause restarts.
  5. Enable logging to stdout/stderr and collect it via a DaemonSet (Fluentd, Filebeat) — this simplifies diagnostics.
  6. Validate manifests with kubectl apply --dry-run=client before applying.
  7. For critical applications, consider using restartPolicy: OnFailure (for Jobs) or configuring backoffLimit in a Job to limit restart attempts.
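A sketch of the probes from point 4; the /healthz and /ready paths and port 8080 are assumptions about the application:

```yaml
containers:
- name: my-app
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15   # give the app time to start before the first check
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
```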

💡 Tip: If the application only crashes in Kubernetes but works locally, compare the environments: variables, volumes, network, node architecture. Often, the issue stems from differences between docker run and a Kubernetes pod.
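One quick way to compare environments, sketched below: dump the variables on both sides and diff them. The cluster half assumes the pod can still be exec'd into, so it is shown commented out:

```shell
# Dump and sort the local environment for a stable comparison
env | sort > local-env.txt

# Cluster side (assumption: the pod is reachable for exec):
# kubectl exec <pod-name> -- env | sort > pod-env.txt
# diff local-env.txt pod-env.txt
```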


