k8s build (version 1.28.2)

- 1. Installation of containerd
  - 1.1 Downloading the tarball
  - 1.2 Preparing the service unit file
- 2. Installation of runc
- 3. Installation of the cni plug-in
  - 3.1 Downloading the files
  - 3.2 Setting up a crictl runtime endpoint
- 4. Configure containerd
- 5. Host configuration
  - 5.1 Editing the hosts file (optional)
  - 5.2 Enabling Traffic Forwarding
  - 5.3 Disabling the firewall and selinux
  - 5.4 Shutting down swap
- 6. Build k8s
  - 6.1 Configuring the yum source
  - 6.2 Installing the tools
  - 6.3 Initialization
- 7. Network plug-ins
  - 7.1 Installing calico
  - 7.2 Configuring the registry mirror address
I'm not sure since when, but openEuler has started to support k8s clusters that use containerd as the runtime. As far as I knew before, it could only go up to 1.23, so here is a separate article on deploying a cluster whose runtime is containerd.
Why write a separate article about deploying on openEuler?
Because if you follow the usual CentOS deployment steps on openEuler, there are a few differences that will stop you from getting any further, so I'm writing a separate post to help you avoid those pitfalls.
1. Installation of containerd
You may be asking: what is there to know about installing containerd, can't you just install it with yum? Yes, you can do that on other operating systems, and on openEuler you won't even get an error, because the yum repository does have an rpm package for containerd and it does install. But that containerd version is too old and it won't work, so you need to download the tarball and install from that instead.
1.1 Downloading the tarball
# Make sure you're not using the containerd from the official repository
[root@master ~]# yum remove containerd -y
[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.7.16/containerd-1.7.16-linux-amd64.tar.gz
[root@master ~]# tar -zxvf containerd-1.7.16-linux-amd64.tar.gz
[root@master ~]# mv bin/* /usr/local/bin/
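To double-check that the binaries you just copied are the ones on your PATH (and not something left over from an rpm install), an optional sanity check:
# Both commands should point at /usr/local/bin and report v1.7.16
[root@master ~]# which containerd
[root@master ~]# containerd --version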
1.2 Preparing the service unit file
[root@master ~]# vim /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
Then set containerd to start automatically at boot.
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable --now containerd
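If everything is in order, the service should now be running. A quick optional check:
[root@master ~]# systemctl is-active containerd
# should print "active"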
2. Installation of runc
The same applies here: you can't use the version installed by yum (at least not at the time of writing, 2024-11-9), so remove it and install the release binary instead.
[root@master ~]# yum remove runc -y
[root@master ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
[root@master ~]# install -m 755 runc.amd64 /usr/local/sbin/runc
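Again, an optional check that the release binary is the one being picked up:
[root@master ~]# runc --version
# should report runc version 1.1.12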
3. Installation of the cni plug-in
3.1 Downloading the files
[root@master ~]# wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz
[root@master ~]# mkdir -p /opt/cni/bin
[root@master ~]# tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.1.tgz
3.2 Setting up a crictl runtime endpoint
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
EOF
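crictl itself ships with the cri-tools package, which gets pulled in later when we install kubeadm. Once it is present, you can verify that the endpoint configured above actually works, for example:
[root@master ~]# crictl info
# should print the containerd runtime status as JSON, with no connection errors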
4. Configure containerd
# Create the config directory first if it does not exist yet
[root@master ~]# mkdir -p /etc/containerd
[root@master ~]# containerd config default > /etc/containerd/config.toml
# Change the cgroup driver to systemd
[root@master ~]# vim /etc/containerd/config.toml
# Find this line of configuration and change false to true
SystemdCgroup = true
# Change the sandbox image
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
Restart containerd
[root@master ~]# systemctl restart containerd
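To confirm that containerd actually loaded the new configuration after the restart, you can dump the live config and grep for the two values we changed (optional check):
[root@master ~]# containerd config dump | grep -E 'SystemdCgroup|sandbox_image'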
5. Host configuration
5.1 Editing the hosts file (optional)
Writing the IPs and hostnames of your nodes into the /etc/hosts file is optional; I won't do it here, and skipping it has no effect on the rest of the steps.
5.2 Enabling Traffic Forwarding
[root@master ~]# modprobe bridge
[root@master ~]# modprobe br_netfilter
[root@master ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@master ~]# sysctl -p
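Note that modprobe only loads the modules for the current boot. If you want bridge and br_netfilter to come back automatically after a reboot, one way (the file name k8s.conf is just a convention) is to declare them in /etc/modules-load.d:
[root@master ~]# cat <<EOF > /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF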
5.3 Disabling the firewall and selinux
[root@master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# systemctl disable --now firewalld
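The sed command only disables SELinux from the next boot onwards; to also put it into permissive mode for the current session (optional), run:
[root@master ~]# setenforce 0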
5.4 Shutting down swap
If swap is configured, turn it off.
[root@master ~]# swapoff -a
Then edit /etc/fstab and comment out the swap line so that swap stays off after a reboot.
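If you prefer to do that non-interactively, a one-liner like the following also works (it comments out every line in /etc/fstab that mentions swap):
[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab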
6. Build k8s
This is where you start building k8s
6.1 Configuring the yum source
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
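Before installing, you can optionally confirm that the repo is reachable and that the 1.28.2 packages are visible:
[root@master ~]# yum makecache
[root@master ~]# yum list kubeadm --showduplicates | grep 1.28.2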
6.2 Installing the tools
[root@master ~]# yum install kubectl kubeadm kubelet -y
[root@master ~]# systemctl enable kubelet
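The plain yum install above takes whatever version the repo considers latest. Since the init step below targets v1.28.2, it can be safer to pin the versions explicitly; a sketch, assuming those package versions are available in the repo:
[root@master ~]# yum install kubectl-1.28.2 kubeadm-1.28.2 kubelet-1.28.2 -y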
6.3 Initialization
[root@master ~]# kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
- Here, change the value of --kubernetes-version to match the kubeadm version you actually installed.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.200.200:6443 --token alefrt.vuiz4k7424ljhh2i \
--discovery-token-ca-cert-hash sha256:1c0943c98d9aeaba843bd683d60ab66a3b025d65726932fa19995f067d62d436
Seeing this message means the initialization succeeded. Next we follow the prompts and create the kubeconfig directory:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
If there are other nodes that need to join the cluster, run the join command printed above on each of those nodes:
[root@master ~]# kubeadm join 192.168.200.200:6443 --token alefrt.vuiz4k7424ljhh2i \
--discovery-token-ca-cert-hash sha256:1c0943c98d9aeaba843bd683d60ab66a3b025d65726932fa19995f067d62d436
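The token in the join command expires after a while (24 hours by default), so if you add a node later and the join fails, you can generate a fresh join command on the master:
[root@master ~]# kubeadm token create --print-join-command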
Then we can check the node status
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 26s v1.28.2
Next we install the calico network plugin, which will bring the node status to Ready.
7. Network plug-ins
7.1 Installing calico
[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
[root@master ~]# kubectl create -f tigera-operator.yaml
Let's move on to the second file.
[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
[root@master ~]# vim custom-resources.yaml
# Change the cidr in there to the address segment used for initializing the cluster
cidr: 10.244.0.0/16
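After editing, the manifest still has to be applied so that the operator creates the calico components. Assuming the file names above, the remaining steps look like this, and you can then watch the calico pods come up:
[root@master ~]# kubectl create -f custom-resources.yaml
[root@master ~]# kubectl get pods -n calico-system -w
# wait until all the calico pods are Running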
7.2 Configuring the registry mirror address
If you don't configure a registry mirror (image accelerator) address, the images won't pull.
[root@master ~]# vim /etc/containerd/config.toml
# Find this line in the config and add 2 lines below it
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["registry mirror address 1", "registry mirror address 2"]
You can search online for a registry mirror address that is still usable and substitute it into the endpoint list above.
Restart containerd
[root@master ~]# systemctl restart containerd
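To check that containerd picked up the mirror configuration, you can try pulling an image through crictl (the image name here is just an example):
[root@master ~]# crictl pull docker.io/library/busybox:latest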
Once all the images have been pulled, the node will come up properly. The final result looks like this:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 15m v1.28.2