What Does an OOM Error Mean?
The OOM (Out of Memory) Killer is a built-in Linux kernel mechanism that triggers when the system exhausts both its physical memory (RAM) and swap space. Rather than letting the entire server hang, the kernel selects one or more processes and forcibly terminates them to free up resources.
In the logs, this typically appears as a line like: Out of memory: Kill process <PID> (<process_name>) score <number> or sacrifice child. The process doesn't simply crash; it is terminated with the SIGKILL (9) signal, which cannot be intercepted. Therefore, you won't see standard error messages in the application's own logs, only an abrupt termination.
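For illustration, such an entry might look roughly like this (the PID, process name, and sizes here are hypothetical, and the exact wording varies between kernel versions):
Out of memory: Killed process 1337 (mysqld) total-vm:2097152kB, anon-rss:1048576kB, file-rss:0kB
The anon-rss field shows how much resident memory the victim held at the moment it was killed.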
Common Causes
- Memory Leak: The application gradually consumes more RAM without releasing it. Over time, no free memory remains.
- Incorrect Server Configuration: Too few resources are allocated for the web server or database, or configuration files specify excessively high limits for workers or buffers.
- Missing or Insufficient Swap Space: If physical memory runs out and swap is not configured, the kernel instantly triggers the OOM Killer without attempting to use disk space.
- Sudden Load Spike: A sharp influx of traffic, execution of heavy scripts, or compiling software (e.g., npm install or make) on a low-end VPS.
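If you want to watch the OOM Killer fire in a controlled setting, you can reproduce the condition deliberately. A minimal sketch, to be run only on a disposable test VM, never in production:
# WARNING: this intentionally exhausts RAM; use a throwaway test machine only.
# tail buffers its entire input in memory, and /dev/zero never ends,
# so consumption grows until the kernel terminates the process.
tail /dev/zero
Afterwards, the kernel log should contain the "Out of memory" entries described in Method 1 below.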
How to Fix It
Method 1: Analyzing Logs and Identifying the Victim
Before changing any settings, verify that the OOM Killer is indeed the culprit and identify which process was affected.
- Open a terminal and run the following command to view kernel messages:
sudo dmesg -T | grep -i "out of memory"
- If the output is empty, check the system journal:
journalctl -k | grep -i oom
- Look for lines containing Killed process [PID] (process_name). Pay attention to the oom_score value: the higher it is, the more likely the process will be killed next time.
💡 Tip: If a process is being killed regularly, set up memory monitoring (e.g., using htop or free -m) to track the exact moment consumption spikes; a simple logging loop is sketched below.
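A minimal sketch of such lightweight monitoring using only standard tools (the log path /var/tmp/mem.log is an arbitrary choice):
# Append a timestamped memory snapshot every 30 seconds.
while true; do
  echo "$(date '+%F %T') $(free -m | awk '/^Mem:/ {print $3 " MB used of " $2 " MB"}')"
  sleep 30
done >> /var/tmp/mem.log
Correlating the last entries before a kill with the dmesg timestamp usually points straight at the runaway process.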
Method 2: Adjusting Priorities via oom_score_adj
The Linux kernel scores each process via oom_score_adj on a scale from -1000 to 1000. A value of -1000 completely protects a process from the OOM Killer, while 1000 makes it the primary target. The default value is 0.
- Find the PID of the critical process (e.g., your database):
pidof mysqld
- Change its priority by writing a new value to the special file:
echo -500 | sudo tee /proc/<PID>/oom_score_adj
- To make this setting persist across reboots, add a rule to systemd for your service. Open the service configuration override:
sudo systemctl edit mysqld.service
- Insert the following lines and save the file:
[Service]
OOMScoreAdjust=-500
- Reload the daemon configuration:
sudo systemctl daemon-reload
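To confirm the new value is in effect (assuming a single running mysqld process; adjust the name for your service):
# Show the adjustment and the resulting score the OOM Killer will use
cat /proc/$(pidof mysqld)/oom_score_adj   # should print -500
cat /proc/$(pidof mysqld)/oom_score       # lower means safer from the OOM Killer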
Method 3: Optimizing Swap and Kernel Parameters
If the server frequently hits RAM limits, adding swap space gives the kernel more time to react and reduces the frequency of OOM Killer triggers.
- Create a 2 GB swap file:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
- Add it to /etc/fstab so it activates on boot:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
- Configure swap usage aggressiveness via the vm.swappiness parameter. A value of 10 means the system will prioritize using RAM and resort to disk only when absolutely necessary:
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
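A quick sanity check that the swap file is active:
swapon --show   # the new /swapfile should be listed with a 2G size
free -h         # the Swap row should now show the added capacity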
⚠️ Important: Do not try to sidestep the OOM Killer by setting vm.panic_on_oom=1. Instead of killing a single process, this makes the kernel panic whenever memory runs out (and reboot, if kernel.panic is configured accordingly), leading to downtime and potential data loss.
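You can review the current values of these parameters at any time; the commented values below are only what you would typically expect and may differ on your distribution:
sysctl vm.panic_on_oom vm.swappiness
# vm.panic_on_oom = 0 means memory exhaustion is handled by the OOM Killer
# vm.swappiness = 10 if the tuning above has been applied (the stock default is 60)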
Method 4: Limiting Consumption via cgroups or systemd
The best way to prevent the OOM Killer from triggering is to set strict memory limits for services in advance. systemd allows you to do this without manually configuring complex cgroups.
- Open the configuration for the service that is consuming too much memory:
sudo systemctl edit apache2.service
- Add memory limit directives:
[Service]
MemoryMax=1G
MemorySwapMax=500M
This caps the service at 1 GB of RAM and 500 MB of swap. When the limit is exceeded, the kernel kills processes inside this unit's cgroup only, so neighboring processes are left untouched, and the service can be brought back automatically by its systemd restart policy.
- Apply the changes:
sudo systemctl restart apache2
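To verify the limits are in force on the running unit (assuming a reasonably recent systemd; apache2 as in the example above):
systemctl show apache2.service -p MemoryMax -p MemoryCurrent
# MemoryMax=1073741824 is the 1 GB cap expressed in bytes
# MemoryCurrent reports the unit's present memory consumption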
Prevention
Regularly update server and application packages: developers frequently fix memory leaks in newer versions. Configure automatic service restarts on failure (Restart=always in systemd units) so critical applications recover immediately after an unexpected termination. Use monitoring tools (Prometheus, Netdata, or Zabbix) with alerts set at 85% RAM usage. This gives you a buffer to intervene manually before the Linux kernel starts "cleaning up" processes on its own.
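As a minimal sketch, the restart policy mentioned above is another drop-in override, added with sudo systemctl edit just as in Methods 2 and 4 (RestartSec=5 is an arbitrary example delay):
[Service]
Restart=always
RestartSec=5
With this in place, a service killed by the OOM Killer comes back automatically a few seconds later, although fixing the underlying memory pressure remains essential.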