What the OOMKilled Error Means
The OOMKilled (Out Of Memory Killed) error occurs when the Linux kernel's OOM killer (Out-Of-Memory killer) forcibly terminates a process in a Docker container because available RAM has been exhausted.
The container exits with code 137 (128 + 9, where 9 is the SIGKILL signal). In Docker logs you will see:
$ docker ps -a
CONTAINER ID   IMAGE           COMMAND           CREATED       STATUS                       PORTS   NAMES
a1b2c3d4e5f6   my-app:latest   "python app.py"   2 hours ago   Exited (137) 5 minutes ago           my-app
$ docker logs my-app
... (application logs) ...
Killed
The error is typical for:
- Containers with high memory consumption (databases, data processing, Java applications).
- Systems with limited RAM (e.g., small cloud instances).
- Scenarios where multiple containers compete for memory.
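The exit-code arithmetic above (128 + signal number) can be checked directly; a minimal Python sketch that decodes a container exit status into the signal that killed it:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Decode a container exit code: values above 128 mean 'killed by signal'."""
    if code > 128:
        sig = signal.Signals(code - 128)
        return f"killed by {sig.name}"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # killed by SIGKILL (the OOM killer sends SIGKILL)
print(decode_exit_code(0))    # exited normally with status 0
```

Note that exit code 137 alone does not prove an OOM kill (a manual docker kill produces the same code); to confirm, check docker inspect --format '{{.State.OOMKilled}}' <container>.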
Causes
- Insufficient RAM on the host. The total memory consumption by all containers and systems exceeds available physical memory.
- No memory limit set for the container. If --memory is not specified, the container can use all free host RAM, leading to OOM.
- Memory leaks in the application. The program inside the container gradually consumes more and more memory (e.g., due to not releasing resources).
- Misconfigured swap. The host may have insufficient or no swap space, accelerating RAM exhaustion.
- Aggressive OOM killer settings. The kernel may kill containers with a high oom_score (by default, containers have a higher score than system processes).
- Running a container without a memory-swap limit. If only --memory is set but not --memory-swap, the container can use swap, which sometimes masks the problem but leads to performance degradation.
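The "memory leak" cause above can often be confirmed without any external tooling; a minimal sketch using Python's standard tracemalloc module to show which source lines are accumulating memory (the leaky_cache here is a deliberately contrived leak for illustration):

```python
import tracemalloc

leaky_cache = []  # simulated leak: entries are appended and never evicted

def handle_request(payload: str) -> None:
    # Bug: every request's payload is cached forever.
    leaky_cache.append(payload * 100)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(1000):
    handle_request(f"request-{i}")

after = tracemalloc.take_snapshot()
# Print the source lines whose allocations grew the most between snapshots.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Running this periodically in a staging container makes a slow leak visible long before the OOM killer does.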
Solution 1: Setting Memory Limits When Starting the Container
The most direct way is to explicitly set memory limits for the container. This prevents host memory exhaustion and ensures the container won't be killed until it reaches its limit.
For docker run:
# --memory:             hard RAM limit
# --memory-swap:        total limit (RAM + swap); if unset, the container may
#                       additionally use swap up to the --memory value
# --memory-reservation: soft limit Docker tries to maintain under memory pressure
docker run -d \
  --name my-app \
  --memory=512m \
  --memory-swap=1g \
  --memory-reservation=256m \
  my-image:latest
For docker-compose.yml:
version: '3.8'
services:
  app:
    image: my-image:latest
    # Swap is not configurable under deploy.resources; use the service-level
    # memswap_limit key instead (supported by the Compose specification).
    memswap_limit: 1g
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
💡 Tip: Start with a limit slightly above the application's normal consumption (check via docker stats). Do not set the limit equal to all host RAM; leave memory for the system and other processes.
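One way to pick a starting limit is to read current usage from docker stats and add headroom; a sketch that parses the output of docker stats --no-stream --format '{{.MemUsage}}' (the sample line and the 1.5x headroom factor are assumptions for illustration, not measurements from a real host):

```python
def suggest_limit(mem_usage: str, headroom: float = 1.5) -> str:
    """Parse a 'USED / LIMIT' string like '210MiB / 512MiB' and suggest
    a --memory value with headroom above current usage."""
    used = mem_usage.split("/")[0].strip()          # e.g. '210MiB'
    units = {"KiB": 1 / 1024, "MiB": 1, "GiB": 1024}
    for unit, factor in units.items():
        if used.endswith(unit):
            mib = float(used[: -len(unit)]) * factor
            return f"{int(mib * headroom)}m"
    raise ValueError(f"unrecognized unit in {used!r}")

# In practice the input would come from:
#   docker stats --no-stream --format '{{.MemUsage}}' my-app
print(suggest_limit("210MiB / 512MiB"))  # 315m
```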
Solution 2: Optimizing the Application Inside the Container
If limits are already set but the container still gets OOM, you need to reduce the application's memory consumption.
For Java applications:
Configure JVM parameters in Dockerfile or the run command:
ENV JAVA_OPTS="-Xmx256m -Xms128m"
CMD java $JAVA_OPTS -jar app.jar
Or in docker-compose.yml:
environment:
- JAVA_OPTS=-Xmx256m
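A common way to keep -Xmx consistent with the container limit is to read the limit from the cgroup files at startup; a hedged sketch (the paths are the standard cgroup v2 and v1 locations, but the 75% fraction and the 256m fallback are assumptions, not official recommendations):

```python
from pathlib import Path
from typing import Optional

def container_memory_limit_bytes() -> Optional[int]:
    """Read the container's memory limit from cgroup v2, falling back to v1.
    Returns None when no limit is set or the files are absent (e.g. macOS)."""
    for path in ("/sys/fs/cgroup/memory.max",                     # cgroup v2
                 "/sys/fs/cgroup/memory/memory.limit_in_bytes"):  # cgroup v1
        p = Path(path)
        if p.exists():
            raw = p.read_text().strip()
            if raw == "max":  # cgroup v2: no limit configured
                return None
            return int(raw)
    return None

def suggest_xmx(limit: Optional[int], fraction: float = 0.75) -> str:
    """Suggest an -Xmx value as a fraction of the container limit."""
    if limit is None:
        return "-Xmx256m"  # fallback when no limit is detectable
    return f"-Xmx{int(limit * fraction) // (1024 * 1024)}m"

print(suggest_xmx(512 * 1024 * 1024))  # -Xmx384m
```

Note that modern JVMs (10+) are container-aware and support -XX:MaxRAMPercentage, which often makes this manual sizing unnecessary.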
For Python/Node.js:
- Use streaming processing for large files instead of loading them into memory.
- Reduce cache sizes (e.g., in Django, CACHES['default']['OPTIONS']['MAX_ENTRIES']).
- Update libraries; memory leaks are sometimes fixed in newer versions.
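The streaming advice above can make an order-of-magnitude difference in peak memory; a minimal sketch contrasting loading a whole file with processing it line by line:

```python
import os
import tempfile

def count_lines_eager(path: str) -> int:
    # Bad under a tight memory limit: the whole file lands in RAM at once.
    with open(path) as f:
        return len(f.read().splitlines())

def count_lines_streaming(path: str) -> int:
    # Good: the file object yields one line at a time; memory stays flat
    # regardless of file size.
    with open(path) as f:
        return sum(1 for _ in f)

# Demonstrate both on a small temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.writelines(f"line {i}\n" for i in range(10_000))

try:
    assert count_lines_eager(tmp.name) == count_lines_streaming(tmp.name) == 10_000
finally:
    os.remove(tmp.name)
```

The same principle applies to JSON/CSV processing: prefer iterative parsers (csv.reader, ijson) over json.load on multi-gigabyte inputs.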
For web servers (Nginx/Apache):
- Reduce worker_processes and worker_connections.
- Configure buffering.
Solution 3: Configuring OOM Score Adjustment
You can influence a container's priority when the OOM killer selects a victim. The --oom-score-adj parameter (from -1000 to 1000) sets the container's "weight." Lower values reduce the chance of the container being killed.
docker run -d \
--name critical-app \
--oom-score-adj=-500 \
my-critical-image
How to choose a value:
- -1000: maximum protection (the container is killed last, but this is not guaranteed).
- 0: default value.
- 1000: highest kill priority (not recommended).
⚠️ Important: This does not disable OOM killer; it only changes the order. If memory runs out, some process will still be killed.
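You can verify the adjustment took effect from inside the container by reading /proc; a sketch that degrades gracefully on systems without procfs (e.g., macOS or Windows):

```python
from pathlib import Path
from typing import Optional

def oom_score_adj(pid: str = "self") -> Optional[int]:
    """Return the process's oom_score_adj, or None where /proc is absent."""
    path = Path(f"/proc/{pid}/oom_score_adj")
    if not path.exists():
        return None
    return int(path.read_text().strip())

adj = oom_score_adj()
if adj is not None:
    assert -1000 <= adj <= 1000  # kernel-enforced range
    print(f"oom_score_adj = {adj}")
else:
    print("no /proc on this system")
```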
Solution 4: Increasing Host Memory or Configuring Swap
If the problem is lack of resources at the host level:
- Increase RAM on the virtual machine/server (e.g., change instance type in AWS).
- Add swap space if none exists or it's too small:
# Check current swap
swapon --show
# Create a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# To enable swap on boot, add to /etc/fstab:
# /swapfile none swap sw 0 0
⚠️ Warning: Swap on SSD can accelerate disk wear. Use only if adding RAM is not possible.
Solution 5: Monitoring and Automated Response
Set up memory monitoring and automated actions:
- Use docker events to track OOM kills (the daemon emits a dedicated oom container event): docker events --filter 'event=oom'
- Integrate with monitoring systems (Prometheus + cAdvisor, Datadog). Set alerts at 80-90% memory usage.
- Use orchestrators (Kubernetes, Docker Swarm) that can automatically restart containers and scale when resources are low.
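With docker events --format '{{json .}}', the daemon emits one JSON object per line, which makes automated reaction easy to script; a sketch that flags OOM events in such a stream (the sample line is illustrative, shaped like real event output but not captured from a live daemon):

```python
import json

def is_oom_event(line: str) -> bool:
    """Return True when a docker-events JSON line describes an OOM kill."""
    event = json.loads(line)
    return event.get("Type") == "container" and event.get("Action") == "oom"

# Illustrative sample in the shape 'docker events --format "{{json .}}"' produces:
sample = ('{"Type": "container", "Action": "oom", '
          '"Actor": {"ID": "a1b2c3d4e5f6", "Attributes": {"name": "my-app"}}}')

if is_oom_event(sample):
    name = json.loads(sample)["Actor"]["Attributes"]["name"]
    print(f"container {name} hit OOM")  # container my-app hit OOM
```

In production, the same function would consume the subprocess stdout of the docker events command and forward matches to your alerting channel.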
Prevention
- Always set memory limits for production containers. Use --memory and --memory-swap.
- Regularly analyze memory consumption via docker stats or monitoring.
- Test the application under load with limited memory (e.g., using stress-ng inside the container).
- Configure health checks in Docker Compose/Kubernetes to quickly detect OOM-related failures.
- Avoid running multiple memory-intensive containers on the same host without proper control.
- Update kernel and Docker—newer versions improve memory management and OOM killer behavior.
FAQ
Can OOM killer be completely disabled for a container?
Docker does provide an --oom-kill-disable flag, but it requires a memory limit to be set, works only with cgroup v1, and is risky: when memory runs out, processes in the container can hang instead of being killed. In practice, reduce the container's oom_score_adj or raise its memory limits so the killer is never triggered.
Why does a container with a memory limit still get OOMKilled?
A limit does not prevent the kill; it scopes it. When the container exceeds its cgroup memory limit, the kernel kills a process inside that container, and the container still exits with code 137. If this keeps happening, the limit is simply too low for the workload: raise it, reduce the application's consumption, or check whether --memory-swap leaves the container any swap headroom.
How to diagnose which application inside the container consumes high memory?
Enter the container (docker exec -it <container> bash) and use utilities: top, htop, ps aux --sort=-%mem. For Java applications, use jcmd <pid> VM.native_memory summary (this requires starting the JVM with -XX:NativeMemoryTracking=summary).
What if the application cannot run within given memory limits?
Optimize the code, increase the limit (if host resources allow), or reconsider the architecture: split a monolith into microservices, move heavy operations to separate containers with higher limits.
Is it correct to set --memory-swap to 2x --memory?
Not always. A 2x value lets the container use as much swap as RAM, which for latency-sensitive applications (e.g., databases) causes unpredictable slowdowns. For such applications it is better to disable swap by setting --memory-swap equal to --memory. Note that --memory-swap=-1 does the opposite: it allows unlimited swap usage.
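The --memory / --memory-swap interaction is easy to get wrong, so it helps to spell out the documented semantics in code; a small sketch (the function name and MiB units are illustrative) encoding the rules: allowed swap is memory-swap minus memory, -1 means unlimited swap, and leaving the flag unset lets the container swap up to the --memory value:

```python
from typing import Optional

def allowed_swap_mib(memory: int, memory_swap: Optional[int]) -> float:
    """How much swap (MiB) a container may use, per Docker's documented
    semantics. memory_swap=None means the flag was not passed."""
    if memory_swap is None:
        return memory          # unset: container may swap up to --memory
    if memory_swap == -1:
        return float("inf")    # -1: unlimited swap
    return memory_swap - memory

assert allowed_swap_mib(512, 512) == 0            # --memory-swap == --memory: no swap
assert allowed_swap_mib(512, 1024) == 512         # 2x: up to 512 MiB of swap
assert allowed_swap_mib(512, None) == 512         # unset: defaults to 2x --memory total
assert allowed_swap_mib(512, -1) == float("inf")  # -1: unlimited swap
```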