
Setting Up Resource Quotas in Kubernetes: Resource Control

In this guide, you'll learn how to create and apply a ResourceQuota in Kubernetes to limit CPU and memory usage in a namespace. This prevents a single project from monopolizing cluster resources.

Updated at February 17, 2026
15-25 min
Medium
FixPedia Team
Applies to: Kubernetes 1.20+, kubectl 1.20+

Introduction / Why This Matters

Resource Quota is a Kubernetes mechanism that limits the total consumption of compute resources (CPU, memory) and the number of objects (pods, services, PersistentVolumeClaims) within a single namespace. This is critically important in multitenant environments where multiple teams or projects share a common cluster. Without quotas, one "greedy" project could exhaust all cluster resources, leading to failures in other applications. With ResourceQuota, a cluster administrator can ensure fair distribution and prevent "noisy neighbors".

Requirements / Preparation

  1. Access to a Kubernetes cluster: You must have kubectl configured with permissions to create resources in the target namespace (typically admin or edit roles are required).
  2. An existing namespace: If you don't have a test namespace, create one:
    kubectl create namespace test-quota
    
  3. Understanding of basic concepts: Knowledge of container requests and limits, as well as resources like pods, services, and persistentvolumeclaims.
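Before proceeding, you can confirm the required permissions up front with kubectl auth can-i. This is a quick sketch that must run against your own cluster; test-quota is the namespace created above:

```shell
# Check that your kubeconfig user may manage quotas and pods
# in the target namespace (each command prints "yes" or "no").
kubectl auth can-i create resourcequota -n test-quota
kubectl auth can-i create pods -n test-quota
```

If either command prints "no", ask a cluster administrator for the admin or edit role in the namespace before continuing.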

Step-by-Step Guide

Step 1: Define the Quota Policy

Before writing the manifest, decide which resources you need to limit, and by how much. Typical fields under spec.hard (the strictly enforced limits) in a ResourceQuota:

  • requests.cpu — Sum of CPU requests across all pods in the namespace.
  • requests.memory — Sum of memory requests.
  • limits.cpu — Sum of CPU limits.
  • limits.memory — Sum of memory limits.
  • pods — Maximum number of pods.
  • services — Maximum number of services.
  • persistentvolumeclaims — Maximum number of PVCs.

Object counts use the plain resource name (pods, not count.pods); Kubernetes 1.9+ also accepts the generic count/<resource> form, e.g. count/pods.

Example policy: For the test namespace test-quota, we will set:

  • CPU requests: 2 cores (2)
  • Memory requests: 4 GiB (4Gi)
  • CPU limits: 4 cores (4)
  • Memory limits: 8 GiB (8Gi)
  • Pod count: 10
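It helps to sanity-check these numbers against what the cluster actually has before committing to them. A quick sketch (note that kubectl top requires the metrics-server add-on, so the second command may not work on every cluster):

```shell
# Total allocatable CPU and memory per node, to size the quota sensibly
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory

# Current real usage per node (requires metrics-server)
kubectl top nodes
```

A namespace quota that exceeds the cluster's total allocatable capacity is legal but meaningless, so keep the hard values below what the nodes can actually provide.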

Step 2: Create the ResourceQuota Manifest

Create a file named quota.yaml with the following content:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-quota
  namespace: test-quota  # Specify your namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "10"
    # services: "5"                # Uncomment to limit Services
    # persistentvolumeclaims: "4"  # Or PVCs

Important:

  • CPU values can be specified in m (millicores, e.g., 500m = 0.5 core) or as whole numbers.
  • Memory values use suffixes Ki, Mi, Gi (multiples of 1024) or K, M, G (multiples of 1000). Gi/Mi is recommended.
  • Object-count fields (pods, services, persistentvolumeclaims, or the generic count/<resource> form) accept whole numbers.
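The gap between the binary and decimal suffixes is easy to underestimate; a quick arithmetic sketch in plain shell shows the difference for 4Gi vs 4G:

```shell
# 4Gi (binary, 1024^3) vs 4G (decimal, 1000^3) in bytes — about 7% apart
gib=$(( 4 * 1024 * 1024 * 1024 ))   # 4Gi
gb=$(( 4 * 1000 * 1000 * 1000 ))    # 4G
echo "4Gi = $gib bytes, 4G = $gb bytes"
```

Mixing the two styles in one quota is allowed but invites off-by-7% surprises, which is why sticking to Gi/Mi throughout is recommended.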

Step 3: Apply the Quota to the Namespace

Apply the created manifest:

kubectl apply -f quota.yaml

You should see confirmation:

resourcequota/compute-resources-quota created

Step 4: Check Quota Status

To verify the quota is active and view current resource usage, run:

kubectl describe quota compute-resources-quota -n test-quota

The output shows two columns: Hard (the configured limits) and Used (current consumption). Initially, Used is 0 for every resource.

Name:            compute-resources-quota
Namespace:       test-quota
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     4
limits.memory    0     8Gi
pods             0     10
requests.cpu     0     2
requests.memory  0     4Gi
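If you prefer machine-readable output over the describe table, the same numbers are exposed in the quota's status fields. A sketch, assuming the quota from this guide has been applied to a live cluster:

```shell
# Hard limits and current usage as JSON objects, one per line
kubectl get quota compute-resources-quota -n test-quota \
  -o jsonpath='{.status.hard}{"\n"}{.status.used}{"\n"}'
```

This form is handy in scripts or monitoring checks that compare used against hard values programmatically.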

Step 5: Test Quota Functionality

Now try to create a resource that will test the quota. Create a simple pod with explicit requests and limits.

  1. Create a file test-pod.yaml:
    apiVersion: v1
    kind: Pod
    metadata:
      name: quota-test-pod
      namespace: test-quota
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          requests:
            cpu: "500m"   # 0.5 cores
            memory: "1Gi"
          limits:
            cpu: "1"      # 1 core
            memory: "2Gi"
    

    This pod requests 0.5 CPU and 1Gi memory, which fits within our limits (requests.cpu: 2, requests.memory: 4Gi).
  2. Create the pod:
    kubectl apply -f test-pod.yaml
    

    The pod should be created successfully.
  3. Check the updated quota:
    kubectl describe quota compute-resources-quota -n test-quota
    

    Now the Used column will show values corresponding to the requests of the created pod (0.5 CPU, 1Gi memory).
  4. Try to exceed the quota. Create a second pod that requests, for example, 2Gi of memory; after it is admitted, total requests.memory in the namespace is 3Gi (1Gi + 2Gi). Now try a third pod with another 2Gi request: 3Gi + 2Gi = 5Gi is more than the 4Gi limit, so the operation fails with an error similar to:
    Error from server (Forbidden): error when creating "pod2.yaml": ... "exceeded quota: compute-resources-quota, requested: requests.memory=2Gi, used: requests.memory=3Gi, limited: requests.memory=4Gi"
    

    This proves the quota is working.
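Once you are done experimenting, you can clean up the test objects (assuming the names used in this guide):

```shell
# Remove the test pod, the quota, and finally the namespace itself
kubectl delete pod quota-test-pod -n test-quota
kubectl delete quota compute-resources-quota -n test-quota
kubectl delete namespace test-quota   # deletes everything left in it
```

Deleting the namespace alone would also remove the pod and the quota, but deleting them explicitly first makes each step observable.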

Verification

  1. Primary check: You successfully created a ResourceQuota and observe the Used field growing as you create new resources (pods, PVCs).
  2. Exceedance check: Attempting to create a resource that would exceed any hard limit results in an exceeded quota error.
  3. Status command: kubectl get quota -n <namespace> shows the quota name and its Hard limits. kubectl describe quota provides detailed usage.

Potential Issues

  • exceeded quota error when creating a pod that seems to fit within limits.
    • Cause: ResourceQuota sums the requests of all pods in the namespace, so a pod that fits on its own can still push the namespace total over a hard value. Check current usage with kubectl describe quota.
    • Solution: Either raise the values in your ResourceQuota or lower the requests in the new pod's manifest.
  • Quota does not apply to already running pods.
    • Cause: This is expected behavior. ResourceQuota is enforced at admission time, i.e., when new objects are created. Existing pods are counted in Used, but they are never evicted, even if they push Used above Hard.
    • Solution: If existing usage already exceeds the new quota, no new pods can be created until usage drops. Either delete or resize existing pods, or raise the quota to cover both existing and future requests.
  • Error creating ResourceQuota: unable to recognize "...": no matches for kind "ResourceQuota" in version "v1".
    • Cause: ResourceQuota has been part of the core v1 API since the earliest stable Kubernetes releases, so this error usually points to a typo in apiVersion or kind, or a malformed manifest, rather than a version problem.
    • Solution: Verify that the manifest reads apiVersion: v1 and kind: ResourceQuota exactly, and check your cluster version with kubectl version (the --short flag was removed in kubectl 1.28).
  • Unclear which specific resource is exhausted.
    • Cause: A namespace can have several quotas and many limited resources, so it is not always obvious which limit was hit.
    • Solution: The exceeded quota error names the quota and the specific resource that was exceeded (e.g., requests.memory). Look for the quota name in the error and inspect its details (kubectl describe quota <name>) to compare Used vs Hard.

F.A.Q.

Does ResourceQuota affect already running pods?
No. Quotas are enforced only when new objects are created; existing pods keep running, though they count toward Used.

How to check current namespace quotas?
Run kubectl get quota -n <namespace> for a summary, or kubectl describe quota <name> -n <namespace> to compare Used against Hard.

What happens if the quota is exceeded?
The API server rejects the create request with a Forbidden "exceeded quota" error; nothing already running is affected.

Can different quotas be set for different resource types?
Yes. A single ResourceQuota can list any combination of resources, and a namespace can have multiple ResourceQuota objects; all of them are enforced together.



FixPedia

Free encyclopedia for fixing errors. Step-by-step guides for Windows, Linux, macOS and more.

© 2026 FixPedia. All materials are available for free.

Made with ❤ for the community