
Linux OOM Killer: Crash Causes and Quick Configuration

Explore the Linux OOM Killer mechanism: why the system forcefully terminates processes and how to configure memory priorities to protect critical services from sudden shutdowns.

Updated at April 6, 2026
10-15 min
Medium
FixPedia Team
Applies to: Ubuntu 20.04/22.04/24.04 · Debian 11+ · CentOS/RHEL 8+ · any distribution with Linux kernel 3.0+

What Does an OOM Error Mean

The OOM (Out of Memory) Killer is a built-in Linux kernel mechanism that triggers when the system completely exhausts its physical memory (RAM) and swap space. Instead of allowing the server to freeze completely, the kernel selects and forcibly terminates one or more processes to free up resources.

In the logs, this typically appears as a line like: Out of memory: Kill process <PID> (<process_name>) score <number> or sacrifice child. The process doesn't just crash—it is terminated by the SIGKILL (9) signal, which cannot be intercepted. Therefore, you won't see standard error messages in the application's own logs, only an abrupt termination.
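From the application's side, an OOM kill looks exactly like any other SIGKILL: the process vanishes with exit status 137 (128 + 9), which is often the only trace left in supervisor or container logs. A minimal demonstration with a throwaway process:

```shell
# Start a long-running process, then kill it the same way the OOM Killer would
sleep 60 &
pid=$!
kill -9 "$pid"                # SIGKILL: no handler can intercept this
wait "$pid" || status=$?
echo "exit status: $status"   # 137 = 128 + 9, the signature of death by SIGKILL
```

This is why supervisors and container runtimes reporting "exit code 137" are usually pointing at an OOM kill.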

Common Causes

  1. Memory Leak: The application gradually consumes more RAM without releasing it. Over time, no free memory remains.
  2. Incorrect Server Configuration: Too few resources are allocated for the web server or database, or configuration files specify excessively high limits for workers or buffers.
  3. Missing or Insufficient Swap Space: If physical memory runs out and swap is not configured, the kernel instantly triggers the OOM Killer without attempting to use disk space.
  4. Sudden Load Spike: A sharp influx of traffic, execution of heavy scripts, or compiling software (e.g., npm install or make) on a low-end VPS.
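The quickest way to see how close a machine is to this state is a snapshot of RAM, swap, and the kernel's own estimate of reclaimable memory (standard tools; the awk one-liner is a convenience sketch, not part of any package):

```shell
# Overall RAM and swap usage in human-readable units
free -h
# Active swap devices/files, if any
swapon --show
# Share of RAM the kernel considers available without swapping
awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%.0f%% of RAM available\n", a*100/t}' /proc/meminfo
```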

How to Fix It

Method 1: Analyzing Logs and Identifying the Victim

Before changing any settings, verify that the OOM Killer is indeed the culprit and identify which process was affected.

  1. Open a terminal and run the following command to view kernel messages:
sudo dmesg -T | grep -i "out of memory"
  2. If the output is empty, check the system journal:
journalctl -k | grep -i oom
  3. Look for lines containing Killed process [PID] (process_name). Pay attention to the oom_score value—the higher it is, the more likely the process is to be killed next time.

💡 Tip: If a process is being killed regularly, set up memory monitoring (e.g., using htop or free -m) to track the exact moment consumption spikes.
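A simple way to catch the culprit in the act is a periodic snapshot of the biggest memory consumers; the ps invocation below is standard and could be run from cron or a watch loop:

```shell
# Top 5 processes by resident memory (RSS, in KiB), largest first
ps -eo pid,comm,rss,%mem --sort=-rss | head -n 6
```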

Method 2: Adjusting Priorities via oom_score_adj

The Linux kernel evaluates each process on a scale from -1000 to 1000. A value of -1000 completely protects a process from the OOM Killer, while 1000 makes it the primary target. The default value is 0.
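Both the kernel's current "badness" score and the adjustment can be read from /proc at any time; here the current shell ($$) stands in for a real service PID:

```shell
# The score the OOM Killer would actually use for this process right now
cat /proc/$$/oom_score
# The manual adjustment applied to it (default 0, range -1000..1000)
cat /proc/$$/oom_score_adj
```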

  1. Find the PID of the critical process (e.g., your database):
pidof mysqld
  2. Change its priority by writing a new value to the special file:
echo -500 | sudo tee /proc/<PID>/oom_score_adj
  3. To make this setting persist across reboots, add a rule to systemd for your service. Open the service configuration override:
sudo systemctl edit mysqld.service
  4. Insert the following lines and save the file:
[Service]
OOMScoreAdjust=-500
  5. Reload the daemon configuration and restart the service so the new value takes effect: sudo systemctl daemon-reload && sudo systemctl restart mysqld.

Method 3: Optimizing Swap and Kernel Parameters

If the server frequently hits RAM limits, adding swap space gives the kernel more time to react and reduces the frequency of OOM Killer triggers.

  1. Create a 2 GB swap file:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
  2. Add it to /etc/fstab so it activates on boot:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
  3. Configure swap usage aggressiveness via the vm.swappiness parameter. A value of 10 means the system will prioritize keeping data in RAM and only resort to disk when absolutely necessary:
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
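To confirm both changes took effect (the swap file active and the new swappiness value loaded):

```shell
# Should list /swapfile with its size and current usage
swapon --show
# Should print 10 after the sysctl change above
cat /proc/sys/vm/swappiness
```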

⚠️ Important: Do not set vm.panic_on_oom=1 in an attempt to avoid the OOM Killer. Instead of terminating a single process, the kernel will panic (and reboot, if configured to do so on panic) whenever memory runs out, leading to downtime and potential data loss.

Method 4: Limiting Consumption via cgroups or systemd

The best way to prevent the OOM Killer from triggering is to set strict memory limits for services in advance. systemd allows you to do this without manually configuring complex cgroups.

  1. Open the configuration for the service that is consuming too much memory:
sudo systemctl edit apache2.service
  2. Add memory limit directives:
[Service]
MemoryMax=1G
MemorySwapMax=500M
  3. Apply the changes: sudo systemctl restart apache2.

This caps the service at 1 GB of RAM and 500 MB of swap. When the limit is exceeded, the kernel's OOM killer acts only inside that service's cgroup, so neighboring processes stay untouched—and with a Restart= directive in the unit, the service comes back on its own instead of taking the whole server down with it.
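Putting it together, a single override file (an illustrative sketch; tune the values to your service) can combine the memory cap with an automatic restart, so the service recovers on its own if the kernel kills it inside its cgroup:

```ini
# /etc/systemd/system/apache2.service.d/override.conf
[Service]
MemoryMax=1G
MemorySwapMax=500M
# Bring the service back automatically after an OOM kill inside the cgroup
Restart=on-failure
RestartSec=5
```

Apply it with sudo systemctl daemon-reload && sudo systemctl restart apache2.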

Prevention

Regularly update server and application packages: developers frequently fix memory leaks in newer versions. Configure automatic service restarts on failure (Restart=always in systemd units) so critical applications recover immediately after an unexpected termination. Use monitoring tools (Prometheus, Netdata, or Zabbix) with alerts set at 85% RAM usage. This gives you a buffer to intervene manually before the Linux kernel starts "cleaning up" processes on its own.
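If a full monitoring stack is overkill, even a cron-driven one-liner can provide the 85% alert mentioned above (a sketch; wire the echo to mail, a webhook, or your alerting tool of choice):

```shell
# Warn when more than 85% of RAM is in use (MemTotal minus MemAvailable)
used=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {print int((t-a)*100/t)}' /proc/meminfo)
if [ "$used" -ge 85 ]; then
  echo "WARNING: RAM usage at ${used}%"
fi
```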

F.A.Q.

Why does a process terminate without an error message?
Because the OOM Killer sends SIGKILL, which the application cannot catch or log; the only traces are in the kernel log (dmesg) and an exit status of 137.

Can I completely disable the OOM Killer?
There is no safe global off switch. You can exempt an individual process by setting its oom_score_adj to -1000, but disabling the mechanism entirely would leave the kernel no option but to hang or panic when memory runs out.

How do I find out which process the OOM Killer terminated?
Run sudo dmesg -T | grep -i "out of memory" or journalctl -k | grep -i oom and look for the "Killed process <PID> (<process_name>)" line.

Hints

  - Check system logs
  - Configure process priorities
  - Add or increase the swap file
  - Limit service memory consumption
© 2026 FixPedia. All materials are available for free.