Introduction / Why This Is Needed
Control Groups (cgroups) are a Linux kernel mechanism that allows isolating and limiting the use of resources (CPU, memory, disk I/O, network) for groups of processes. Systemd, which serves as the init system and service manager in most modern distributions, provides a convenient and declarative interface for working with cgroups.
This guide will show you how to use systemd's built-in capabilities to manage resources without needing to manually work with cgcreate or edit files in /sys/fs/cgroup. You will be able to:
- Limit the amount of RAM and swap space for a service.
- Set a CPU usage limit (as a percentage or in cores).
- Configure disk I/O priority.
- Create logical process groups (slices) for unified resource management.
After completing this guide, you will have full control over the resource consumption of your services, which is especially important for hosting, containerization, and the stable operation of multi-user systems.
Prerequisites / Preparation
Before you begin, ensure that:
- You have root privileges or a user with `sudo` access.
- systemd version 235 or newer is installed. Check with `systemctl --version`.
- The Linux kernel supports cgroups v2 (recommended) or v1. Check the hierarchy type with `mount | grep cgroup`.
  - Ideal case: `cgroup2` is mounted at `/sys/fs/cgroup`.
  - If older controllers (`memory`, `cpu`, etc.) are mounted separately, systemd can still manage them via a unified hierarchy.
- You know the name of the systemd service for which you want to set limits (e.g., `nginx.service`, `docker.service`). List active services with `systemctl list-units --type=service --state=running`.
Step 1: Check systemd Version and cgroups v2 Support
First, let's verify system readiness. Run in the terminal:
```shell
# Check systemd version (requires 235+)
systemctl --version | head -n1

# Check that systemd is using cgroups v2 (preferred).
# If "cgroup2" appears in the output, the unified hierarchy is in use.
mount | grep -E 'cgroup|cgroup2'

# Example expected output for cgroups v2:
# cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

# If you see separate mounts for memory, cpu, blkio, etc., that's cgroups v1.
# systemd works with it too, but the syntax for some directives may differ slightly.
```
💡 Tip: On most distributions released after 2020 (Ubuntu 20.04+, Fedora 31+, Debian 11+), cgroups v2 is used by default.
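If you provision many hosts, the two checks above can be folded into one small script. A sketch, with hard-coded sample strings standing in for real command output (on a real host, substitute the commands shown in the trailing comments):

```shell
# Sketch: script the readiness checks from Step 1.
# The sample strings below are illustrative stand-ins, not live output.
version_line='systemd 252 (252.22-1~deb12u1)'  # real host: $(systemctl --version | head -n1)
fs_type='cgroup2fs'                            # real host: $(stat -fc %T /sys/fs/cgroup)

# Second field of "systemd NNN (...)" is the major version number.
version=$(printf '%s\n' "$version_line" | awk '{print $2}')

if [ "$version" -ge 235 ]; then
  echo "systemd $version: OK"
else
  echo "systemd $version: too old (need 235+)" >&2
fi

if [ "$fs_type" = 'cgroup2fs' ]; then
  echo "cgroups v2: OK"
else
  echo "cgroups: legacy v1 hierarchy ($fs_type)"
fi
```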
Step 2: Create a Custom Slice (Optional but Recommended)
A Slice is essentially a folder in the cgroup hierarchy. All processes launched within a specific slice will inherit its resource limits. This is convenient for grouping related services (e.g., all services of a web application).
- Create a configuration file for the new slice. Let's call it `myapp.slice` (replace `myapp` with a meaningful name):

  ```shell
  sudo nano /etc/systemd/system/myapp.slice
  ```

- Add basic content: a description and default limits. Note that a slice's position in the hierarchy follows from its name: a plain `myapp.slice` sits at the top level next to `system.slice`, while a name like `system-myapp.slice` would place it inside `system.slice`.

  ```ini
  [Unit]
  # The description belongs in [Unit], not [Slice]
  Description=Slice for grouping my application's services

  [Slice]
  # Sets limits for ALL units inside this slice by default.
  # These values can be overridden in a specific service's config.
  MemoryMax=2G
  CPUQuota=50%
  ```

- Save the file (`Ctrl+O`, `Enter`, `Ctrl+X`).
- Reload the systemd configuration:

  ```shell
  sudo systemctl daemon-reload
  ```

- Start the slice (a slice is activated automatically as soon as a unit inside it starts, but an explicit start loads it into the manager right away):

  ```shell
  sudo systemctl start myapp.slice
  ```
Now any service you add to this slice (via `Slice=myapp.slice` in its unit file) will automatically receive the limits specified above, unless it overrides them itself.
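For a service that ships with the distribution, you don't need to edit the vendor unit file to move it into the slice; `sudo systemctl edit nginx.service` (service name illustrative) creates a drop-in override. A minimal sketch:

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# (systemctl edit creates this path for you)
[Service]
Slice=myapp.slice
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart nginx.service`, and the service moves under `myapp.slice` together with its limits.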
Step 3: Configure Resource Limits for a Specific Unit (Service)
Let's add limits to your service's configuration file. Suppose it's `myapp.service`.

- Create or edit the unit file:

  ```shell
  sudo nano /etc/systemd/system/myapp.service
  ```

- In the `[Service]` section, add the necessary directives. For a full list, see `man systemd.resource-control`. Example configuration:

  ```ini
  [Unit]
  Description=My Important Application

  [Service]
  Type=simple
  ExecStart=/usr/local/bin/myapp

  # Run the service inside our custom slice (if you created one in
  # Step 2). If not, skip this line.
  Slice=myapp.slice

  # --- Resource limits (override or supplement the slice) ---

  # RAM limit (including page cache). Exceeding it can trigger the OOM
  # killer for the service's processes.
  MemoryMax=1G

  # Swap usage limit. Set to 0 to disable swap for this service.
  MemorySwapMax=512M

  # CPU usage limit, relative to one core. 50% = half of one core.
  CPUQuota=50%

  # Relative CPU priority under contention (1-10000, default 100,
  # higher = more CPU time).
  CPUWeight=500

  # Relative disk I/O priority under contention (1-10000, default 100).
  IOWeight=300

  # Absolute read/write bandwidth caps per device (requires cgroups v2):
  # IOReadBandwidthMax=/dev/sda 10M
  # IOWriteBandwidthMax=/dev/sda 5M

  [Install]
  WantedBy=multi-user.target
  ```

- Save the file.
- Critical step: apply the configuration changes:

  ```shell
  sudo systemctl daemon-reload
  ```

- Restart the service for the new limits to take effect:

  ```shell
  sudo systemctl restart myapp.service
  ```
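Since `CPUQuota=` is measured against a single core, values above 100% are normal on multi-core machines. A small sketch of the arithmetic (the core count is hard-coded for illustration; on a real host use `nproc`):

```shell
cores=8            # real host: $(nproc)
machine_share=50   # percent of the WHOLE machine you want to allow

# CPUQuota= is relative to one core, so scale by the core count:
# 50% of an 8-core machine = 4 full cores = 400% of one core.
quota=$(( cores * machine_share ))
echo "CPUQuota=${quota}%"
```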
Step 4: Manage Properties at Runtime (Temporary Changes)
Sometimes you need to quickly limit an already running process without editing files. Use `systemctl set-property`.

- For a permanent change (survives restarts and reboots; systemd writes a drop-in under `/etc/systemd/system.control/`):

  ```shell
  sudo systemctl set-property myapp.service MemoryMax=500M
  ```

  The new limit is applied to the running service immediately; no restart is required.

- For a temporary, runtime-only change (stored under `/run`, lost on reboot):

  ```shell
  sudo systemctl set-property --runtime myapp.service CPUQuota=80%
  ```
This is handy for testing.
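To confirm what value systemd actually applied, `systemctl show -p MemoryMax myapp.service` prints the limit in bytes. A sketch of parsing that output (the sample line is a hard-coded stand-in for the real command):

```shell
# Sketch: read back an applied limit and convert bytes to MiB.
line='MemoryMax=524288000'   # real host: $(systemctl show -p MemoryMax myapp.service)
bytes=${line#MemoryMax=}     # strip the "MemoryMax=" prefix
echo "MemoryMax is $(( bytes / 1024 / 1024 )) MiB"
```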
Step 5: Verify the Result
After configuring and restarting the service, check that the limits are applied.
- View service status and its cgroup properties:

  ```shell
  systemctl status myapp.service
  ```

  In the output, look for lines starting with `Memory:`, `CPU:`, `Tasks:`. They show current limits and consumption.

- Use `systemd-cgtop` (like `top` for cgroups):

  ```shell
  # Refresh every 2 seconds (-d sets the delay between updates)
  systemd-cgtop -d 2
  ```

  You'll see the hierarchy of all systemd cgroups and their resource consumption (memory, CPU, I/O). Your service should appear under the `system.slice` branch or your custom `myapp.slice`.

- Directly view files in the cgroup virtual filesystem:

  ```shell
  # Navigate to your service's or slice's directory
  cd /sys/fs/cgroup/$(systemctl show -p ControlGroup myapp.service | cut -d= -f2)

  # Check the set limits
  cat memory.max   # bytes, or "max" if unlimited
  cat cpu.max      # "<quota> <period>" in microseconds, e.g. "50000 100000" for 50%
  ```

  For cgroups v1, the paths and filenames differ (e.g., `memory.limit_in_bytes`).

- Check from inside the process: if the application itself can report statistics (e.g., via `/proc/self/status`), ensure `VmRSS` does not exceed `MemoryMax`.
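The `cpu.max` file read above encodes `CPUQuota=` as two numbers: the allowed CPU time per period and the period length, both in microseconds. A sketch of converting that back to a percentage (the sample value is hard-coded; on a real host read the file from your unit's cgroup directory):

```shell
# Sketch: decode a cgroups v2 cpu.max value into a CPUQuota percentage.
cpu_max='50000 100000'   # real host: $(cat /sys/fs/cgroup/<unit path>/cpu.max)
quota=${cpu_max% *}      # microseconds of CPU time allowed per period
period=${cpu_max#* }     # length of one accounting period in microseconds
echo "Effective CPUQuota: $(( 100 * quota / period ))%"
```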
Common Issues
Issue: Service fails to start or crashes immediately after daemon-reload
Cause: Syntax error in the .service or .slice configuration file.
Solution: Check the INI file's correctness. Use sudo systemctl status myapp.service — the output will indicate the parsing error line. You can also verify syntax with sudo systemd-analyze verify /etc/systemd/system/myapp.service.
Issue: MemoryMax limits are ignored, process uses more memory
Cause 1: The service is swapping. Under cgroups v2, `MemoryMax` limits RAM only; once that limit is reached, pages can be pushed out to swap, which is limited separately by `MemorySwapMax`. Total consumption (RAM + swap) can therefore reach `MemoryMax` + `MemorySwapMax`.
Solution: Set `MemorySwapMax=0` to disable swap for this service, or account for both limits together.
Cause 2: The process was launched manually from a shell, not via systemd, while you're configuring a unit.
Solution: Ensure the process is a child of the systemd unit. Use pstree -p | grep myapp or systemctl status myapp.service — it should list the process PIDs.
Issue: CPUQuota=50% does not behave as expected on a multi-core system
Cause: `CPUQuota=` is expressed relative to a single core: 50% means half of one core's time, 200% means two full cores. The quota is an aggregate across all cores, so a multi-threaded process under `CPUQuota=50%` may still briefly run on several cores at once, as long as its total CPU time stays at half a core per period.
Solution: Recalculate the needed value (e.g., `CPUQuota=400%` to allow four cores' worth of CPU time). To additionally pin the service to specific CPUs, use `AllowedCPUs=` (e.g., `AllowedCPUs=0` to restrict it to core 0; requires cgroups v2).
Issue: Error "Failed to set property: Permission denied" with set-property
Cause: You're trying to change a property that can only be set in a configuration file (some properties are "immutable" after service start), or you lack sufficient privileges.
Solution: Use sudo. If the property truly cannot be changed "on the fly", make the change in the unit file and run daemon-reload + restart.
Advanced Example: Creating a Slice to Group Containers
Suppose you run several Docker containers but want to prevent them from consuming all server resources. Docker historically used its own cgroupfs driver by default (recent releases default to the systemd driver on cgroups v2 hosts). If the daemon is configured to use systemd (via `--exec-opt native.cgroupdriver=systemd`), each container becomes a child of `system.slice` or `docker.slice`, and you can create a separate slice for a container group:
- Create `/etc/systemd/system/containers.slice`:

  ```ini
  [Unit]
  Description=Slice for isolating containers

  [Slice]
  # Limit the entire container group to 4 cores and 8 GB RAM
  CPUQuota=400%
  MemoryMax=8G
  ```

- Reload systemd: `sudo systemctl daemon-reload`.
- When starting a container via `docker run`, specify which cgroup to place it in (supported by Docker with the systemd driver):

  ```shell
  docker run -d --name my_container --cgroup-parent=containers.slice nginx:alpine
  ```

Now all processes in that container will be under `containers.slice`, and their total consumption will not exceed the defined limits.
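The `--exec-opt` daemon flag mentioned above can also be set persistently in Docker's configuration file, `/etc/docker/daemon.json` (restart `docker.service` afterwards). A minimal sketch:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```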
You have mastered the key mechanisms for resource management via systemd and cgroups. This approach is integrated into the OS and requires no additional software. Remember that overly strict limits can cause services to fail (OOM killer, CPU throttling), so test configurations in a staging environment. For more complex scenarios (e.g., network limits), explore additional directives in man systemd.resource-control and the kernel documentation on cgroups v2.