What is the OOM Killer in Linux?
OOM Killer (Out-Of-Memory Killer) is a Linux kernel mechanism that is triggered when the system has exhausted all available RAM and swap space and cannot allocate memory for a new request. Its task is to forcibly terminate one or more processes to free up memory and prevent a complete system crash.
The mechanism operates based on a victim selection algorithm that scores each process (oom_score). The process with the highest score is considered the least important and is selected for termination.
How Does the OOM Killer Work?
- Critical memory shortage: The kernel determines that free memory (including swap) is insufficient to satisfy a request.
- Score calculation: an `oom_score` is calculated for each process based on:
  - amount of memory used (RSS);
  - process priority (nice value);
  - runtime (newer processes may have an advantage);
  - whether the process is essential to the system — `init` (PID 1) and kernel threads are never killed;
  - `oom_score_adj` settings (manual adjustment).
- Selection and killing: the process with the highest score receives a `SIGKILL` signal and is terminated.
- Logging: the event is recorded in the kernel log (visible via `dmesg`).
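The score the kernel would use can be inspected at any time through `/proc` (shown here for the current shell):

```shell
# Effective badness score the OOM Killer would use (read-only)
cat /proc/self/oom_score
# Manual adjustment applied on top, in the range -1000..1000
cat /proc/self/oom_score_adj
```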
How to Diagnose OOM Killer Activity?
1. Check the System Log
Look for entries containing keywords:
dmesg | grep -i "killed process"
# Example output:
# [12345.678] Out of memory: Kill process 1234 (some_process) score 500 or sacrifice child
# [12345.679] Killed process 1234 (some_process) total-vm:123456kB, anon-rss:98765kB, file-rss:1234kB
Also check logs:
grep -i "oom" /var/log/syslog /var/log/messages
journalctl -k | grep -i "oom"
2. Analyze Scores (oom_score)
View the current score for all processes:
# Print "score PID command", highest scores first
for pid in $(ps -e -o pid=); do
  score=$(cat "/proc/$pid/oom_score" 2>/dev/null) || continue
  echo "$score $pid $(ps -p "$pid" -o comm= 2>/dev/null)"
done | sort -rn | head -20
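Since memory use (RSS) dominates the score on modern kernels, sorting processes by resident memory is a quicker approximation:

```shell
# Top 10 processes by resident set size (RSS, in KiB)
ps -eo pid,rss,comm --sort=-rss | head -11
```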
How to Prevent OOM Killer Activation?
1. Increase Available Resources
- Add RAM — physically expand memory.
- Configure swap:
sudo fallocate -l 8G /swapfile   # create an 8 GB swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots
- Configure the `vm.swappiness` parameter (in `/etc/sysctl.conf`):
vm.swappiness=10   # prefer keeping pages in RAM over swapping (default is 60)
Apply:
sudo sysctl -p
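After enabling swap, it is worth verifying the result:

```shell
swapon --show                 # list active swap areas with sizes
free -h                       # totals for RAM and swap
cat /proc/sys/vm/swappiness   # confirm the applied swappiness value
```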
2. Adjust Process Priorities
Temporary adjustment via /proc
# Reduce the chance of being killed for PID 1234 (value from -1000 to 1000)
echo -500 | sudo tee /proc/1234/oom_score_adj
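On systems with util-linux 2.33 or newer, the `choom` utility wraps the same `/proc` interface:

```shell
# Show the current OOM score and adjustment for PID 1234
choom -p 1234
# Equivalent to the echo above: set oom_score_adj to -500
sudo choom -p 1234 -n -500
```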
Permanent configuration via systemd
For a systemd-managed service:
# /etc/systemd/system/myservice.service.d/oom.conf
[Service]
OOMScoreAdjust=-900
Reload: sudo systemctl daemon-reload && sudo systemctl restart myservice
3. Control Memory Consumption
- Use `cgroups` (v2) to limit memory for containers/processes.
- Configure limits in applications (e.g., `JAVA_OPTS` for Java, `worker_processes` for Nginx).
- Disable unnecessary services.
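For a one-off command, a cgroup v2 memory cap can be sketched with `systemd-run` (the binary name `./memory_hungry_app` is a placeholder):

```shell
# Run a command in a transient scope capped at 512 MiB of RAM;
# exceeding the cap triggers a cgroup-local OOM kill instead of a system-wide one
systemd-run --user --scope -p MemoryMax=512M -p MemorySwapMax=0 ./memory_hungry_app
```

This requires a running systemd user session; for long-lived services, the same `MemoryMax=` property belongs in the unit file.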
4. Monitoring and Alerts
Set up alerts when a threshold is reached (e.g., 80% RAM+swap usage):
- Tools: `netdata`, `prometheus` + `alertmanager`, `zabbix`.
- Simple check script:
#!/bin/bash
USAGE=$(free | awk '/Mem:/ {printf "%.0f", $3/$2 * 100}')
if [ "$USAGE" -gt 85 ]; then
    echo "Warning: memory usage ${USAGE}%" | wall
    # Additional actions: send email, restart a service
fi
5. Kernel Behavior Tuning (Advanced)
`vm.overcommit_memory`:
- 0 (default) — heuristic overcommit.
- 1 — always allow overcommit (risk of OOM).
- 2 — refuse allocations beyond the commit limit defined by `vm.overcommit_ratio` (recommended for critical systems).
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=50   # commit limit = swap + 50% of RAM
`vm.panic_on_oom`:
- 0 (default) — run the OOM Killer.
- 1 — kernel panic on a system-wide OOM (the machine must then be rebooted, or reboots automatically if `kernel.panic` is set).
- 2 — kernel panic on any OOM, including ones confined to a cgroup or cpuset (rarely used).
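The effect of these settings can be observed in `/proc` (CommitLimit is only enforced when `vm.overcommit_memory=2`):

```shell
# Current policy (0/1/2) and ratio
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
# Kernel's commit limit vs. memory already committed
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```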
Advanced Tools
1. systemd-oomd (Modern Alternative)
In modern distributions (Ubuntu 22.04+, Fedora), the systemd-oomd daemon may run, which manages memory more intelligently and can terminate processes before the kernel's OOM Killer triggers. Configure via /etc/systemd/oomd.conf.
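A minimal `/etc/systemd/oomd.conf` sketch (the values are illustrative, not recommendations; see `man oomd.conf` for the defaults):

```
# /etc/systemd/oomd.conf
[OOM]
SwapUsedLimit=90%
DefaultMemoryPressureLimit=60%
```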
2. earlyoom
A lightweight daemon that monitors available RAM and swap and kills the process with the highest `oom_score` when both fall below configurable thresholds (by default 10% free RAM and 10% free swap), acting before the kernel's OOM Killer. Installation:
sudo apt install earlyoom # Debian/Ubuntu
sudo systemctl enable --now earlyoom
Configure via arguments in /etc/default/earlyoom or the systemd unit.
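On Debian/Ubuntu the daemon's flags typically go in `/etc/default/earlyoom` (the `--avoid` regex here is only an example):

```
# /etc/default/earlyoom
# -m/-s: minimum free RAM/swap in percent before killing starts
EARLYOOM_ARGS="-m 10 -s 10 --avoid '(^|/)(sshd|systemd)$'"
```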
3. ps_mem — Memory Usage Analysis
Install this utility for accurate per-process memory accounting:
sudo pip install ps_mem # available from PyPI; some distributions also package it
ps_mem
Common Issues and Solutions
| Symptom | Possible Cause | Solution |
|---|---|---|
| OOM Killer kills `mysqld` or `postgres` | Buffer pool too large, insufficient RAM | Reduce `innodb_buffer_pool_size` (MySQL) or `shared_buffers` (PostgreSQL); add RAM. |
| OOM Killer triggers during compilation | `make -j` uses too much memory | Limit parallel jobs: `make -j$(nproc --all)` → `make -j2` or `make -j$(($(nproc --all)/2))`. |
| No swap, system "freezes" before OOM | Kernel thrashes, evicting and re-reading file-backed pages while trying to reclaim memory | Add a swap file/partition. |
| Incorrect `oom_score` for a process | Manual `oom_score_adj` or cgroup settings | Check values: `cat /proc/<PID>/oom_score_adj`. Set appropriate values. |
Conclusion
The OOM Killer is the last line of defense against complete memory exhaustion. While its activation may seem catastrophic, it prevents a total system crash. Key steps for managing it:
- Monitor: track memory usage.
- Prevent: configure swap, limit application consumption.
- Tune: use `oom_score_adj` to protect critical processes.
- Analyze: always review logs after it activates.
Remember: the best defense is to have sufficient RAM + swap and control application behavior.