
Installing Kubernetes on Linux: Deploying a Cluster from Scratch

In this guide, you'll set up a working Kubernetes cluster on Ubuntu or Debian using kubeadm. You'll get a ready-to-run container orchestration environment.

Updated on April 6, 2026
15-30 minutes
Medium
FixPedia Team
Applies to: Ubuntu 22.04 LTS / 24.04 LTS, Debian 12, Kubernetes v1.30+

Introduction / Why This Is Needed

Kubernetes has become the de facto standard for container orchestration in production environments. Deploying it on Linux gives you a scalable platform that automatically manages your application lifecycle, balances load, and recovers services from failures. This guide will help you build a stable cluster using the official kubeadm packages, avoiding outdated wrappers and complex scripts.

Requirements / Preparation

Before starting, ensure you have at least one server or virtual machine. Minimum resources: 2 CPU and 2 GB RAM. You will need SSH access with sudo privileges, a stable internet connection for downloading packages, and open ports 6443, 80, 443, 10250–10252. If you plan to scale the cluster, prepare additional nodes with the same specifications.

Step 1: System Preparation and Disabling Swap

Kubernetes requires swap to be disabled for proper resource scheduling. Run the command:

sudo swapoff -a

To make this change persistent after reboot, open /etc/fstab and comment out the line containing swap. Also, ensure the kernel modules for network filters are loaded:

sudo modprobe overlay
sudo modprobe br_netfilter
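To make both changes survive a reboot, the swap line can be commented out non-interactively and the modules listed under /etc/modules-load.d. A sketch, assuming a standard /etc/fstab layout where swap entries contain the word "swap":

```shell
# Comment out any swap entry so it is not re-enabled on boot
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load overlay and br_netfilter automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```

If your swap is managed by systemd (e.g. a swap unit), mask it instead of editing fstab.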

Apply kernel parameters to allow traffic forwarding between interfaces:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
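Before moving on, you can verify that the parameters took effect:

```shell
# All three should print a value of 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```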

Step 2: Installing and Configuring the Container Runtime

A container runtime is required for Kubernetes to function. We will use containerd as the most stable option recommended by the community.

sudo apt update
sudo apt install -y containerd

Create the default configuration file:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Open the /etc/containerd/config.toml file, find the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section, and set SystemdCgroup = true. Without this, kubelet cannot manage cgroups correctly on systemd-based distributions. Restart and enable the service:

sudo systemctl restart containerd
sudo systemctl enable containerd
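The manual edit can also be scripted. This sed one-liner assumes the default configuration generated above, where the option appears exactly once as SystemdCgroup = false:

```shell
# Flip the runc cgroup driver to systemd, then apply the change
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```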

Step 3: Adding the Repository and Installing k8s Components

Install utilities for working with HTTPS and add the official Kubernetes GPG key:

sudo apt install -y apt-transport-https ca-certificates curl
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the repository to your sources list:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install the components, pinning the version to avoid automatic updates that could break the cluster:

sudo apt update
sudo apt install -y kubelet=1.30.0-1.1 kubeadm=1.30.0-1.1 kubectl=1.30.0-1.1
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm bootstraps the cluster, kubelet manages containers on the node, and kubectl is your CLI tool for communicating with the API server.
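You can confirm the installed versions and the hold before proceeding:

```shell
# Both should report v1.30.0
kubeadm version -o short
kubectl version --client

# Should list kubelet, kubeadm, and kubectl
apt-mark showhold
```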

Step 4: Initializing the Control Plane Node

Run the initialization. The --pod-network-cidr parameter defines the subnet for internal pods, which is required by most plugins:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Wait for completion. The terminal will display a command to copy the admin configuration and a kubeadm join string for adding additional nodes. Save these in a secure location. Configure access for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
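If you lose the kubeadm join string, it can be regenerated on the control plane node at any time (bootstrap tokens expire after 24 hours by default):

```shell
# Prints a fresh "kubeadm join <ip>:6443 --token ... --discovery-token-ca-cert-hash ..." line
sudo kubeadm token create --print-join-command
```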

Step 5: Deploying a Network Plugin and Verifying Health

Pods will not receive IP addresses without a CNI plugin. We will install Flannel because it requires no additional configuration:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Wait 1–2 minutes. Check node status:

kubectl get nodes

All nodes should transition to the Ready state.
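Two quick checks help here. First, the Flannel pods themselves must be Running (recent manifests deploy into the kube-flannel namespace; older ones used kube-system). Second, on a single-node setup the control plane taint must be removed, or regular pods will never be scheduled:

```shell
# Flannel DaemonSet pods should be Running on every node
kubectl get pods -n kube-flannel

# Single-node clusters only: allow regular workloads on the control plane
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```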

Verifying the Result

Deploy a test pod to confirm the scheduler and network are working:

kubectl run nginx-test --image=nginx:latest

Check the status and internal IP:

kubectl get pods -o wide

If you see a Running status and an assigned address from the 10.244.x.x subnet, the cluster is fully ready for your services. Delete the test object: kubectl delete pod nginx-test.
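For a slightly fuller smoke test, you can run nginx as a Deployment and expose it outside the cluster via a NodePort service (the name nginx-demo is arbitrary here):

```shell
# Create a two-replica deployment and expose it on a NodePort
kubectl create deployment nginx-demo --image=nginx:latest --replicas=2
kubectl expose deployment nginx-demo --port=80 --type=NodePort

# Find the assigned port, then curl http://<node-ip>:<node-port> from the host
kubectl get svc nginx-demo

# Clean up
kubectl delete svc,deployment nginx-demo
```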

Potential Issues

  • Error: failed to run Kubelet: failed to create kubelet: failed to get container info for "/system.slice/kubelet.service": ensure containerd is running and the SystemdCgroup parameter is set to true.
  • Node remains in NotReady or NetworkUnavailable status: verify that the kubectl apply -f ... command for the CNI plugin succeeded. This error also often occurs when UDP ports 8472 and 4789 (used for VXLAN traffic) are blocked by a firewall.
  • kubeadm init fails at the kubelet-start stage: check that swap is actually disabled and that your containerd version is compatible with Kubernetes v1.30. The output of journalctl -u kubelet -f will show the exact cause of the failure.

F.A.Q.

Can I install Kubernetes on a home PC?
Yes, as long as the machine (or a virtual machine on it) has at least 2 CPU cores and 2 GB RAM and runs a supported distribution such as Ubuntu 22.04/24.04 or Debian 12.

Is it necessary to disable swap for k8s to work?
Yes. The kubelet expects swap to be off so the scheduler can reason about memory accurately; kubeadm init will fail otherwise (see Step 1).

Which network plugin (CNI) should a beginner choose?
Flannel is a good starting point: it works out of the box with the 10.244.0.0/16 pod subnet used in this guide and requires no additional configuration.

Are root privileges required to install the components?
Yes, installation and cluster initialization require sudo. Day-to-day kubectl usage works as a regular user once ~/.kube/config is set up as shown in Step 4.



© 2026 FixPedia. All materials are available for free.
