
Fixing 'No space left on device' Error in Linux: Causes and 5 Solutions

This article helps quickly diagnose and fix the critical 'No space left on device' error in Linux. Learn how to find large files and directories, and master several disk space cleanup methods.

Updated on February 15, 2026
10-15 min
Easy
FixPedia Team
Applies to: Ubuntu 22.04+, Debian 11+, CentOS 8+, RHEL 9+, Fedora 36+, Arch Linux

What the No space left on device Error Means

The No space left on device error (code ENOSPC) is a system message from the Linux kernel indicating that a write operation to disk cannot be performed because the target filesystem is completely full. It appears in the console or in application logs (e.g., web server, database, build system) when attempting to create, modify, or move a file.

Typical symptoms:

  • Running a command like touch newfile or installing a package via apt/yum results in the message No space left on device.
  • Failures in services that write logs or data (e.g., MySQL, PostgreSQL, Docker).
  • A web server (Nginx/Apache) returns a 500 error and logs disk full.
  • System notifications about low disk space (in GUI environments).

Common Causes

The error occurs for one primary reason — physical or logical filling of a disk partition to 100%. Specific scenarios include:

  1. The root partition (/) is full. A common issue on virtual servers or with careless package and log management.
  2. The /var or /var/log partition grows due to unlimited log storage, package cache, or application data (e.g., databases).
  3. Many "orphaned" files. Files that were deleted (rm) but are still open by a running process. They continue to occupy space until the process terminates.
  4. Huge dump or backup files left in home directories (/home) or in /tmp.
  5. The filesystem has run out of inodes. df -h still shows free space, but df -i reports IUse% at 100%; this is usually caused by millions of tiny files (session files, cache entries, a stuck mail queue).
  6. Root space reservation. By default on ext2/3/4 filesystems, 5% of space is reserved for the root user (tunable with tune2fs -m). On large drives, this can be gigabytes inaccessible to regular users.
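ENOSPC can occur even when free bytes remain, because the filesystem may be out of inodes rather than blocks. A minimal triage sketch, assuming the root filesystem is the suspect:

```shell
# Either of these at 100% produces "No space left on device"
df -h /     # block (byte) usage: check the Use% column
df -i /     # inode usage: check the IUse% column; 100% here means
            # ENOSPC errors even while df -h shows free gigabytes
```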

Solutions

Method 1: Diagnose and Find Large Files (Basic)

First, you need to understand what exactly is consuming space.

  1. Check all mounted filesystems:
    df -h
    

    The -h flag makes the output "human-readable" (in MB, GB). Find the partition with Use% at 100% or close to it. Usually this is /, /home, or /var.
  2. Identify the largest directories on that partition. For example, if the root (/) is full:
    sudo du -sh /* 2>/dev/null | sort -rh | head -n 20
    
    • sudo — needed to access all directories.
    • du -sh — calculates the total directory size (-s) in a readable format (-h).
    • /* — expands to all top-level directories directly under root.
    • 2>/dev/null — suppress "Permission denied" errors.
    • sort -rh — sort by size descending (reverse, human numeric).
    • head -n 20 — show the top 20.

    Result: you'll see a list like:
    45G /var
    12G /home
    8.5G    /usr
    ...
    

    Now drill down into the largest directory (e.g., /var):
    sudo du -sh /var/* 2>/dev/null | sort -rh | head -n 10
    
  3. Search for specific large files (if you need to find files, not directories):
    sudo find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null | awk '{ print $5, $9 }' | sort -rh | head -n 20
    

    This command finds all files larger than 100 MB and outputs their size and path, sorted descending.
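If parsing ls output feels fragile, GNU find can print size and path itself. A sketch, assuming GNU find is available (find_big is a helper name introduced here; -xdev keeps the scan on a single filesystem):

```shell
# find_big: list the 20 largest files under a directory (GNU find)
# -xdev stays on one filesystem; %s prints the size in bytes
find_big() {
    find "$1" -xdev -type f -size +"$2" -printf '%s\t%p\n' 2>/dev/null \
        | sort -rn | head -n 20
}

find_big /var 100M    # e.g. everything over 100 MB under /var
```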

Method 2: Remove Orphaned Files (deleted but still open)

Sometimes space is occupied by files that were logically deleted with rm but remain open by a running process. Such files do not appear in du, but df shows the space as used.

  1. Find these files:
    sudo lsof | grep '(deleted)'
    

    Or for a specific partition:
    sudo lsof / | grep '(deleted)'
    
  2. In the output, you'll see something like:
    COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
    mysqld   1234   mysql   8w   REG  253,0   10G 1234567 /var/lib/mysql/ibdata1 (deleted)
    

    Here, the file /var/lib/mysql/ibdata1 (10 GB) is deleted but still used by the mysqld process (PID 1234).
  3. Free the space. There are two approaches:
    • Restart the process (if possible without data loss): sudo systemctl restart mysql.
    • Truncate the file (more aggressive, but sometimes necessary): sudo truncate -s 0 /proc/1234/fd/8. Note that sudo cat /dev/null > /proc/1234/fd/8 does not work as expected, because the redirection is performed by your unprivileged shell, not by sudo. Truncating through /proc zeros the still-open file and frees its blocks. Warning: this may cause data corruption in the application! Use only if you're sure the process can recreate the file (e.g., a log file).
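The mechanism can be reproduced harmlessly in a scratch directory. The sketch below opens a file on descriptor 9 of the current shell, deletes it, and truncates it through /proc (fd 9 is chosen for the demo; with a real process, substitute its PID and the fd number reported by lsof):

```shell
# Demo: a deleted-but-open file, and reclaiming its space via /proc
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=8 status=none  # an 8 MB file
exec 9<"$tmp"       # keep it open on fd 9 of this shell
rm "$tmp"           # "deleted", but the 8 MB are still allocated
size_before=$(stat -Lc %s /proc/$$/fd/9)
: > /proc/$$/fd/9   # truncate through the /proc link; blocks are freed
size_after=$(stat -Lc %s /proc/$$/fd/9)
echo "$size_before -> $size_after"
exec 9<&-           # close the descriptor
```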

Method 3: Clean Package Manager Cache and Old Kernels

System package managers and kernel updates are frequent culprits for filling /var.

  • For Debian/Ubuntu:
    # Clear the cache of downloaded .deb packages
    sudo apt-get clean
    
    # Remove old, unnecessary packages and kernels
    sudo apt-get autoremove --purge
    
    # Milder alternative: remove only obsolete .deb files from the cache
    sudo apt-get autoclean
    
  • For RHEL/CentOS/Fedora:
    # Clear all yum/dnf cache
    sudo yum clean all   # for CentOS 7
    sudo dnf clean all  # for CentOS 8+/Fedora
    
    # Remove old kernels (caution! keep at least 2: current and previous)
    sudo package-cleanup --oldkernels --count=2   # CentOS 7 (requires yum-utils)
    sudo dnf remove --oldinstallonly --setopt installonly_limit=2 kernel   # CentOS 8+/Fedora
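Whichever distribution you are on, confirm which kernel is running before removing any, so it is never deleted. A quick check (the dpkg line applies to Debian/Ubuntu only and is skipped silently elsewhere):

```shell
# The running kernel: never remove this version
uname -r

# Installed kernel packages (Debian/Ubuntu; "ii" lines are installed)
dpkg --list 'linux-image*' 2>/dev/null | awk '/^ii/ {print $2, $3}'
```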
    

Method 4: Clean System Logs and Temporary Files

  1. Clear current logs (if logrotate is not configured or broken):
    # Clear the main system log (usually the largest)
    sudo sh -c 'cat /dev/null > /var/log/syslog'
    # For journald (systemd logs), use the built-in utility instead
    sudo journalctl --vacuum-time=3d  # keep logs only for the last 3 days
    sudo journalctl --vacuum-size=100M  # keep no more than 100 MB
    
  2. Clean temporary files:
    # Clean /tmp (ensure no important files are there!)
    sudo rm -rf /tmp/*
    sudo rm -rf /var/tmp/*
    
    # Clean browser caches (if this is a workstation)
    rm -rf ~/.cache/mozilla/firefox/*.default*/cache2/*
    rm -rf ~/.cache/google-chrome/Default/Cache/*
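Before clearing anything, it helps to measure which logs actually dominate. A small sketch (biggest_logs is a helper name introduced here):

```shell
# biggest_logs: show the largest entries under a directory, biggest first
biggest_logs() {
    du -sh "$1"/* 2>/dev/null | sort -rh | head -n 10
}

biggest_logs /var/log                   # typical culprits: syslog, journal/, nginx/
journalctl --disk-usage 2>/dev/null || true   # journald's own footprint, if present
```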
    

Method 5: Handling Docker (if used)

Docker is a known space consumer due to images, containers, and volumes.

  1. Remove unused images, containers, networks, and volumes:
    # Remove EVERYTHING not in use (caution!)
    docker system prune -a --volumes
    
    # Note: docker system prune has no dry-run flag; inspect usage first
    docker system df    # space used by images, containers, and volumes
    
  2. Clean specific resources:
    docker image prune -a          # all unused images
    docker container prune        # stopped containers
    docker volume prune           # unused volumes
    docker network prune          # unused networks
    
  3. Move Docker's data directory if space keeps filling. The default storage driver, overlay2, stores everything under /var/lib/docker; on systems with limited root space, relocate that directory to a separate disk via the data-root setting in /etc/docker/daemon.json. (The older devicemapper driver is deprecated and not recommended.)
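One way to relocate Docker's data directory, sketched as a config/migration fragment under stated assumptions (/mnt/bigdisk is an example mount point; data-root is dockerd's storage-location setting):

```shell
# Sketch: move Docker's data directory to a larger disk (example path)
sudo systemctl stop docker
sudo mkdir -p /mnt/bigdisk/docker
sudo rsync -aHAX /var/lib/docker/ /mnt/bigdisk/docker/

# Point dockerd at the new location in /etc/docker/daemon.json:
#   { "data-root": "/mnt/bigdisk/docker" }

sudo systemctl start docker
docker info | grep 'Docker Root Dir'   # verify the new path is in use
```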

Prevention

  1. Configure logrotate. Ensure configs in /etc/logrotate.conf and /etc/logrotate.d/ are active and correctly limit log size/age (e.g., size 100M, rotate 7).
  2. Set up monitoring. Add a cron task that sends an alert when a partition exceeds 85%:
    # /etc/cron.daily/disk-space-check
    #!/bin/bash
    THRESHOLD=85
    CURRENT=$(df / | awk 'NR==2 {print $5}' | tr -d '%')
    if [ "$CURRENT" -ge "$THRESHOLD" ]; then
        echo "Critical: partition / is $CURRENT% full" | mail -s "Disk full on $(hostname)" admin@example.com
    fi
    
  3. Regularly run apt-get clean/yum clean all after mass updates.
  4. Use separate partitions or disks for /var, /home, /var/log on servers. This isolates the problem.
  5. Install ncdu — an interactive disk space analyzer (sudo ncdu /). It's more user-friendly than du for interactive exploration.
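The cron example in item 2 watches only the root partition. A variant that checks every real filesystem (check_usage is a helper name introduced here; -P forces one line per filesystem, and -x skips pseudo-filesystems):

```shell
# check_usage: print every mount point at or above the given Use% threshold
check_usage() {
    df -P -x tmpfs -x devtmpfs | awk -v t="$1" '
        NR > 1 {
            use = $5; sub(/%/, "", use)
            if (use + 0 >= t) printf "%s is %s%% full\n", $6, use
        }'
}

check_usage 85
```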


FixPedia

Free encyclopedia for fixing errors. Step-by-step guides for Windows, Linux, macOS and more.

© 2026 FixPedia. All materials are available for free.

Made with ♥ for the community