Introduction / Why This Is Needed
Free disk space on a Linux server or workstation gradually fills up with temporary files, application caches, old logs, and debug dumps. Manual cleanup is time-consuming and easily forgotten, so the root partition (/) eventually fills up and causes system failures.
This guide will show you how to set up safe automatic disk cleanup using cron or systemd-timer. You will be able to:
- Regularly delete files in `/tmp` and `/var/tmp`
- Clean old logs from `/var/log` (considering rotation policies)
- Remove outdated package manager caches (`apt`, `dnf`, `pacman`)
- Control the size of user home directories

All actions are safe and use file age as the criterion.
Requirements / Preparation
Before setup, ensure:

- You have `sudo` privileges or `root` access.
- Disk analysis utilities are installed (recommended: `ncdu`):

```bash
# Ubuntu/Debian
sudo apt update && sudo apt install ncdu
# Fedora/CentOS/RHEL
sudo dnf install ncdu
# Arch
sudo pacman -S ncdu
```

- You know which directories on your system occupy the most space. Run:

```bash
sudo ncdu /
```

Or, if `ncdu` is not available:

```bash
sudo du -h --max-depth=1 / 2>/dev/null | sort -hr
```
Step 1: Creating the Cleanup Script
Create a file, for example, /usr/local/bin/disk-cleanup.sh:
```bash
#!/usr/bin/env bash
# disk-cleanup.sh — automatic disk cleanup in Linux
# Removes temporary files, old logs, and caches older than N days.
# Safety: checks that target directories exist.

set -euo pipefail  # Safe bash mode

# Configuration (adjust to your needs)
TMP_DIRS=("/tmp" "/var/tmp")
LOG_DIRS=("/var/log")
CACHE_DIRS=(
    "/var/cache/apt/archives/partial"
    "/var/cache/dnf"
    "/var/cache/pacman/pkg"
    "/root/.cache"
)
DAYS_OLD=7           # Delete files older than 7 days
MIN_FREE_PERCENT=10  # Minimum free percentage to trigger cleanup (optional; see the alerting example in Step 4)

# Function to safely delete files older than N days
clean_directory() {
    local dir="$1"
    local days="$2"
    if [[ ! -d "$dir" ]]; then
        echo "⚠️ Directory $dir does not exist, skipping."
        return 0
    fi
    echo "🧹 Cleaning $dir (files older than $days days)..."
    # Find and delete files, but not the directories themselves
    find "$dir" -type f -mtime +"$days" -print -delete 2>/dev/null || true
    # Delete empty directories (deepest first, to avoid affecting structure)
    find "$dir" -mindepth 1 -depth -type d -empty -delete 2>/dev/null || true
}

# Function to clean package manager caches
clean_package_cache() {
    echo "📦 Cleaning package manager caches..."
    # APT (Debian/Ubuntu)
    if command -v apt-get &>/dev/null; then
        echo "  → APT: removing cached .deb files..."
        apt-get clean 2>/dev/null || true
    fi
    # DNF (Fedora/RHEL/CentOS)
    if command -v dnf &>/dev/null; then
        echo "  → DNF: removing cached packages..."
        dnf clean all 2>/dev/null || true
    fi
    # Pacman (Arch)
    if command -v pacman &>/dev/null; then
        echo "  → Pacman: removing cached packages..."
        pacman -Sc --noconfirm 2>/dev/null || true
    fi
}

# Main logic
echo "🚀 Starting disk cleanup $(date)"

# 1. Temporary files
for dir in "${TMP_DIRS[@]}"; do
    clean_directory "$dir" "$DAYS_OLD"
done

# 2. Package manager caches
clean_package_cache

# 3. Extra cache directories from CACHE_DIRS
for dir in "${CACHE_DIRS[@]}"; do
    clean_directory "$dir" "$DAYS_OLD"
done

# 4. Logs (only old, rotated files; do not touch current ones)
for dir in "${LOG_DIRS[@]}"; do
    if [[ -d "$dir" ]]; then
        echo "📄 Cleaning logs in $dir (files older than $DAYS_OLD days)..."
        find "$dir" -type f -name "*.log.*" -mtime +"$DAYS_OLD" -print -delete 2>/dev/null || true
        find "$dir" -type f -name "*.gz" -mtime +"$DAYS_OLD" -print -delete 2>/dev/null || true
    fi
done

# 5. Optional: user caches (use with caution!)
# Uncomment if needed (a quoted glob does not expand, so loop over it):
# for d in /home/*/.cache; do clean_directory "$d" "$DAYS_OLD"; done

echo "✅ Cleanup completed $(date)"
echo "📊 Current disk usage:"
df -h / 2>/dev/null || true
```
Make the script executable:
```bash
sudo chmod +x /usr/local/bin/disk-cleanup.sh
```
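Before scheduling it, you can also syntax-check the script without executing anything: bash's `-n` flag parses the file and reports errors only. In practice you would run `bash -n /usr/local/bin/disk-cleanup.sh`; the sketch below uses a throwaway file so it is self-contained.

```bash
#!/usr/bin/env bash
# Parse-only check: bash -n reads the script and reports syntax errors
# without executing it — safe even for a destructive cleanup script.
cat > /tmp/syntax-demo.sh <<'EOF'
set -euo pipefail
echo "hello"
EOF
if bash -n /tmp/syntax-demo.sh; then
    echo "syntax OK"
fi
rm -f /tmp/syntax-demo.sh
```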
Script notes:

- `set -euo pipefail` — safe mode: the script stops on errors.
- `DAYS_OLD=7` — adjust to your policy (e.g., 3 days for `/tmp`, 30 for logs).
- We do not delete directories, only files and empty subdirectories.
- `apt-get clean` removes the local `.deb` cache but not configurations.
- For browser caches (`~/.cache/mozilla`, `~/.cache/google-chrome`), add the path to `CACHE_DIRS`, but test carefully.
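Since the script deletes by file age, it is worth previewing a run first. A minimal dry-run sketch: it builds a throwaway directory with faked timestamps (`touch -d`) and uses `find` with `-print` only, so nothing real is touched.

```bash
#!/usr/bin/env bash
# Dry-run sketch: list what the real script WOULD delete, using -print
# only (no -delete). File ages are faked with touch -d in a temp dir.
set -euo pipefail
demo=$(mktemp -d)
touch -d "10 days ago" "$demo/stale.tmp"   # older than DAYS_OLD
touch "$demo/fresh.tmp"                    # recent, must survive
DAYS_OLD=7
echo "Would delete:"
find "$demo" -type f -mtime +"$DAYS_OLD" -print
rm -rf "$demo"
```

On the real system, the same idea is `sudo find /tmp -type f -mtime +7 -print`, run manually before enabling the schedule.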
Step 2: Setting Up Automatic Execution via cron
Simple and universal method.
- Open root's crontab:

```bash
sudo crontab -e
```

- Add a line for daily execution at 2:00 AM:

```
0 2 * * * /usr/local/bin/disk-cleanup.sh >> /var/log/disk-cleanup.log 2>&1
```

Explanation:

- `0 2 * * *` — every day at 02:00.
- `>> /var/log/disk-cleanup.log 2>&1` — all output (stdout and stderr) is written to the log.
- The log can be rotated via `logrotate` (see Step 5).

- Save and exit. Cron applies the schedule automatically.
Alternative: systemd-timer (recommended for modern systems)
If you use systemd (Ubuntu 22.04+, Fedora, CentOS 8+), this is more flexible and reliable.
- Create a service file:

```bash
sudo tee /etc/systemd/system/disk-cleanup.service > /dev/null <<'EOF'
[Unit]
Description=Disk Cleanup Service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/disk-cleanup.sh
# Runs as root by default; use User=/Group= to run as another user
# User=someuser
# Group=somegroup
EOF
```

- Create a timer file:

```bash
sudo tee /etc/systemd/system/disk-cleanup.timer > /dev/null <<'EOF'
[Unit]
Description=Run disk cleanup daily

[Timer]
OnCalendar=daily
Persistent=true
# RandomDelaySec=1h  # Uncomment to distribute load
# Ensure the time does not conflict with other tasks
EOF
```

(The timer activates the service of the same name automatically, so no `Requires=` line is needed.)

- Enable and start:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now disk-cleanup.timer
```

- Check status:

```bash
sudo systemctl status disk-cleanup.timer
sudo systemctl list-timers | grep disk-cleanup
```

The timer file above omits an `[Install]` section's `WantedBy=timers.target` only if your distribution adds it by default; to be safe, include:

```
[Install]
WantedBy=timers.target
```
Advantages of systemd-timer:

- Logs automatically go to `journalctl`: `sudo journalctl -u disk-cleanup.service`.
- With `Persistent=true`, a run missed because the system was off is executed at the next boot.
- You can set `RandomDelaySec` to distribute load.
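Note that `OnCalendar=daily` fires at midnight. If you prefer the 02:00 slot used in the cron variant, you can spell out the schedule explicitly; a sketch of the relevant `[Timer]` settings:

```ini
[Timer]
# Same schedule as the cron example: every day at 02:00, with jitter
OnCalendar=*-*-* 02:00:00
RandomDelaySec=30min
Persistent=true
```

You can verify any `OnCalendar` expression with `systemd-analyze calendar "*-*-* 02:00:00"`, which prints the next elapse time.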
Step 3: Configuring Exclusions and Fine-Tuning
Adding Exclusions (whitelist)
If there are important files in the cleaned directories, create an exclusion list:
```bash
# Add at the beginning of the script (after set):
EXCLUDE_PATTERNS=(
    "/tmp/important-file"
    "/var/log/secure"    # Do not delete authentication logs!
    "/var/log/auth.log"
)

# Then build "! -path" arguments from the array and pass them to find
# inside clean_directory:
EXCLUDE_ARGS=()
for p in "${EXCLUDE_PATTERNS[@]}"; do
    EXCLUDE_ARGS+=( ! -path "$p" )
done
find "$dir" -type f -mtime +"$days" "${EXCLUDE_ARGS[@]}" -print -delete
⚠️ Important: Never exclude entire directories like `/var/log` — that would stop the cleanup of rotated logs entirely. Exclude only specific files.
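The exclusion mechanics can be verified in isolation. This self-contained sketch creates a throwaway directory with two files, excludes one via `! -path`, and shows that only the other is matched:

```bash
#!/usr/bin/env bash
# Self-contained check of the "! -path" exclusion pattern in a temp dir.
set -euo pipefail
demo=$(mktemp -d)
touch "$demo/keep.log" "$demo/drop.log"

EXCLUDE_PATTERNS=("$demo/keep.log")
EXCLUDE_ARGS=()
for p in "${EXCLUDE_PATTERNS[@]}"; do
    EXCLUDE_ARGS+=( ! -path "$p" )
done

# Prints only $demo/drop.log — keep.log is excluded
find "$demo" -type f "${EXCLUDE_ARGS[@]}" -print
rm -rf "$demo"
```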
Distribution-Specific Adjustments
For Ubuntu/Debian (also clean Snap cache):
```bash
# Add after clean_package_cache in the script:
if [[ -d "/var/lib/snapd/cache" ]]; then
    find "/var/lib/snapd/cache" -type f -mtime +30 -delete
fi
```
For servers with small /tmp in RAM (tmpfs):
```bash
# Usually /tmp on tmpfs is cleared on reboot. But if needed:
clean_directory "/tmp" 1  # Only files older than 1 day
```
Step 4: Verifying Results and Monitoring
Test Run
```bash
# Run the script manually:
sudo /usr/local/bin/disk-cleanup.sh

# Or for systemd:
sudo systemctl start disk-cleanup.service
sudo systemctl status disk-cleanup.service
```
Check what was deleted
```bash
# See how much space was freed:
df -h /
# Or by directory:
sudo du -sh /tmp /var/log 2>/dev/null
```
Monitoring via logs
- Cron: `sudo tail -f /var/log/disk-cleanup.log`
- systemd: `sudo journalctl -u disk-cleanup.service -f`
Set up free space alerts (optional)
Add free space check and alerting to the script:
```bash
MIN_FREE_GB=5  # Alert below 5 GB free
FREE_SPACE=$(df --output=avail / | tail -n1)  # in 1K blocks
FREE_SPACE_GB=$((FREE_SPACE / 1024 / 1024))
if [[ $FREE_SPACE_GB -lt $MIN_FREE_GB ]]; then
    echo "⚠️ Low disk space: ${FREE_SPACE_GB}GB" | \
        mail -s "Critical: low space on $(hostname)" admin@example.com
fi
```
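The script's `MIN_FREE_PERCENT` variable can be wired up the same way, as a guard that skips cleanup entirely when enough space is already free. A sketch (the threshold and `df` parsing are assumptions to adapt to your layout):

```bash
#!/usr/bin/env bash
# Sketch: exit early when free space on / is above the threshold, so the
# cleanup only runs under actual disk pressure.
set -euo pipefail
MIN_FREE_PERCENT=10
used=$(df --output=pcent / | tail -n1 | tr -dc '0-9')  # e.g. " 42%" -> 42
free=$((100 - used))
if (( free >= MIN_FREE_PERCENT )); then
    echo "Free space ${free}% >= ${MIN_FREE_PERCENT}%, skipping cleanup."
    exit 0
fi
echo "Free space ${free}% < ${MIN_FREE_PERCENT}%, running cleanup..."
```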
Step 5: Log Rotation for the Script (if using cron)
Create a logrotate config:
```bash
sudo tee /etc/logrotate.d/disk-cleanup > /dev/null <<'EOF'
/var/log/disk-cleanup.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 640 root adm
}
EOF
```
For systemd, logs are handled by journald (viewed with `journalctl`) and do not require separate rotation.
Verification
After setup:
- Ensure the script runs automatically:

```bash
# For cron:
sudo grep disk-cleanup /var/log/syslog | tail
# For systemd-timer:
sudo systemctl list-timers | grep disk-cleanup
```

- Check that space is freed:

```bash
# Record current usage:
df -h / > /tmp/disk-before.txt
# After a day:
df -h / > /tmp/disk-after.txt
diff /tmp/disk-before.txt /tmp/disk-after.txt
```

- Ensure important files were not deleted (especially in `/var/log` and `/tmp`).
Troubleshooting
❌ "Permission denied" when deleting files
Cause: Script not run as root, but some directories (/var/log, /tmp) require elevated privileges.
Solution: Run the script with root privileges (in root's crontab, or as a systemd service without `User=`).
❌ Needed files are deleted (e.g., active sessions in /tmp)
Cause: DAYS_OLD is too small or no exclusions.
Solution:

- Increase `DAYS_OLD` to 7-10 for `/tmp`.
- Add exclusions via `! -path` in `find`.
- Test with `find` without `-delete` to preview the list first.
❌ systemd-timer does not run
Cause: Timer not enabled, or a scheduling conflict.

Solution:

```bash
sudo systemctl enable --now disk-cleanup.timer
sudo systemctl list-timers | grep disk-cleanup  # Check next run time
```
❌ Too aggressive package cache cleanup
Cause: apt-get clean removes all .deb files, including those needed for rollback.
Solution: Use `apt-get autoclean` instead of `clean` — it removes only package files that can no longer be downloaded from the repositories.

Change the line in the script:

```bash
apt-get autoclean 2>/dev/null || true
```
❌ Errors when cleaning browser caches
Cause: Browsers may lock files or use complex structures.
Solution: Do not clean browser caches automatically. Rely on built-in functions (e.g., "Clear History" in Firefox). If absolutely necessary, delete only files older than 30 days and only in ~/.cache/mozilla/firefox/*.default/cache2/.
❌ No space after cleanup
Cause: Problem is not temporary files but large data (application logs, databases, media files).
Solution: Use `ncdu` to find "heavy" directories. You may need to:

- Archive old data.
- Configure log rotation for specific applications (e.g., MySQL, Docker).
- Increase the disk size.
❌ Cron does not send error emails
Cause: By default, cron mails output to the local user, but no MTA is configured.

Solution:

- Configure `ssmtp`/`msmtp` for sending mail.
- Or redirect errors to a log (as in the example) and monitor it.
- Or use systemd — logs go to `journalctl`.
Additional Features
Cleaning Docker Objects (if Docker is installed)
Add to the script (before `clean_package_cache`):

```bash
if command -v docker &>/dev/null; then
    echo "🐳 Cleaning Docker..."
    # Remove stopped containers, dangling images, unused networks and volumes
    docker system prune -af --volumes 2>/dev/null || true
fi
```
💡 Tip: On production servers, use `docker system prune` cautiously — `--volumes` also deletes data in unused volumes. Better to add `--filter "until=24h"` or delete only dangling images (`docker image prune -f`).
Cleaning Old Kernels (Ubuntu/Debian)
Old installed kernels can occupy gigabytes. Add:

```bash
# Removes old kernels, keeping the currently running one
if command -v apt-get &>/dev/null; then
    echo "🔄 Cleaning old kernels..."
    # Option 1: automatic (use with caution!)
    # apt-get autoremove --purge -y
    # Option 2: manual list (recommended)
    # dpkg -l 'linux-image-*' | grep '^ii' | awk '{print $2}' | \
    #     grep -v "$(uname -r)" | \
    #     xargs -r apt-get purge -y 2>/dev/null || true
fi
```
⚠️ Important: Never remove the currently running kernel (`uname -r`). Better to keep the last 2 as a fallback.
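The filtering logic in Option 2 can be previewed safely before adding any `purge` step. This sketch feeds sample `dpkg -l` lines through the same pipeline (minus `xargs`), so you can see exactly which packages would be selected; on a real system, replace the `printf` with `dpkg -l 'linux-image-*'` and the `current` variable with `$(uname -r)`:

```bash
#!/usr/bin/env bash
# Preview removable kernel packages WITHOUT purging anything.
# Sample dpkg -l output stands in for the real command.
set -euo pipefail
current="6.8.0-45-generic"   # stand-in for $(uname -r)
printf '%s\n' \
  "ii  linux-image-6.8.0-40-generic  6.8.0-40.40  amd64  Linux kernel" \
  "ii  linux-image-6.8.0-45-generic  6.8.0-45.45  amd64  Linux kernel" \
  "rc  linux-image-6.8.0-38-generic  6.8.0-38.38  amd64  Linux kernel" |
grep '^ii' | awk '{print $2}' | grep -v "$current"
# → linux-image-6.8.0-40-generic
```

Only fully installed packages (`^ii`) that do not match the running kernel survive the filter, which is exactly what `xargs apt-get purge` would then receive.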
Integration with tmpreaper or tmpwatch
Some distributions (RHEL, CentOS) use tmpwatch to clean /tmp. Instead of a custom script, you can configure it:
```bash
# For CentOS/RHEL:
sudo yum install tmpwatch
sudo tee /etc/cron.daily/tmpwatch > /dev/null <<'EOF'
#!/bin/sh
/usr/sbin/tmpwatch 24 /tmp
/usr/sbin/tmpwatch 48 /var/tmp
EOF
sudo chmod +x /etc/cron.daily/tmpwatch
```
But our script is more versatile and includes package cache.
Conclusion
Automating disk cleanup is an essential practice for maintaining Linux system health. You have set up:
- A flexible script with settings for your distribution.
- Execution via cron or systemd-timer.
- Logging and monitoring.
Next steps:

- Test the script in dry-run mode (replace `-delete` with `-print` only, or prefix destructive commands with `echo`).
- Set up free-space alerts (e.g., via `monit` or Zabbix).
- For servers, consider centralized log collection (ELK, Graylog) — this reduces load on local disks.
If space is still running out, look for "heavy" directories with ncdu — the problem may be in data, not temporary files.