K8S Installation Procedure
I. Preparatory work
1. Prepare three hosts (one Master node, two Node nodes) as follows:
Role | IP | RAM | CPU cores | Disk |
---|---|---|---|---|
Master | 192.168.116.131 | 4G | 4 | 55G |
Node01 | 192.168.116.132 | 4G | 4 | 55G |
Node02 | 192.168.116.133 | 4G | 4 | 55G |
2. Disable SELinux, because SELinux prevents some K8S components from working properly:
sed -i '1,$s/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# reboot
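The substitution can be rehearsed on a throwaway copy first, so the pattern is verified before the real /etc/selinux/config is touched (a minimal sketch; the temporary file stands in for the real config):

```shell
# Rehearse the SELinux edit on a temporary file before touching the real config.
tmpconf=$(mktemp)
echo "SELINUX=enforcing" > "$tmpconf"
sed -i '1,$s/SELINUX=enforcing/SELINUX=disabled/g' "$tmpconf"
grep '^SELINUX=' "$tmpconf"    # prints: SELINUX=disabled
rm -f "$tmpconf"
```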
3. Configure the hostname for each of the three hosts as follows:
Control node Master:
hostnamectl set-hostname master && bash
Worker node Node01:
hostnamectl set-hostname node01 && bash
Worker node Node02:
hostnamectl set-hostname node02 && bash
4. Configure host files for each of the three hosts:
-
Edit the hosts file:
vim /etc/hosts
-
Modify the contents of the file to add the three hostnames and their IPs:
127.0.0.1 localhost localhost4 localhost4.localdomain4
::1 localhost localhost6 localhost6.localdomain6
192.168.116.131 master
192.168.116.132 node01
192.168.116.133 node02
-
After modifying, check that the three hosts can reach each other with the ping command:
ping -c1 -W1 master
ping -c1 -W1 node01
ping -c1 -W1 node02
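The hosts entries can also be appended idempotently from the shell. This is a sketch that works on a temporary file so it is safe to try as-is; point it at /etc/hosts for real use:

```shell
# Append each cluster entry only if it is not already present (idempotent).
hosts_file=$(mktemp)   # stand-in for /etc/hosts
for entry in "192.168.116.131 master" "192.168.116.132 node01" "192.168.116.133 node02"; do
    grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
cat "$hosts_file"
rm -f "$hosts_file"
```

Because of the grep guard, running the loop a second time adds nothing, so it is safe to re-run on a host that is already configured.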
5. On each of the three hosts, install the required base packages and related dependencies:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip autoconf automake zlib-devel epel-release openssh-server libaio-devel vim ncurses-devel socat conntrack telnet ipvsadm
The required packages are explained below:
yum-utils: provides a number of auxiliary tools for the yum package manager, such as yum-config-manager and repoquery.
device-mapper-persistent-data: Relates to Linux's device mapping capabilities, often associated with LVM (Logical Volume Management) and container storage (e.g. Docker).
lvm2: Logical Volume Manager for managing logical volumes on disks, allowing flexible disk partition management.
wget: A non-interactive web downloader that supports HTTP, HTTPS and FTP protocols and is often used to download files.
net-tools: provides classic networking tools such as ifconfig and netstat for viewing and managing network configuration.
nfs-utils: A toolkit that supports NFS (Network File System) and allows clients to mount remote file systems.
lrzsz: lrz and lsz, command-line tools for the X/ZMODEM file-transfer protocols on Linux, commonly used to transfer data over a serial port.
gcc: A GNU C compiler for compiling C programs.
gcc-c++: A GNU C++ compiler for compiling C++ language programs.
make: used to build and compile programs, usually together with a Makefile to control a program's compilation and packaging process.
cmake: A cross-platform build system generation tool for managing the compilation process of a project, especially for large and complex projects.
libxml2-devel: development header files for the libxml2 library; libxml2 is a C library for parsing XML files.
openssl-devel: Header files and development libraries for development of the OpenSSL library, the library for SSL/TLS encryption.
curl: A command line tool for transferring data, supporting multiple protocols (HTTP, FTP, etc.).
curl-devel: libraries and header files for developing with curl, supporting curl-related functionality in code.
unzip: for extracting .zip archives.
autoconf: a tool for automatically generating configuration scripts, often used to generate a package's configure script.
automake: automatically generates Makefile templates; used together with autoconf in build systems.
zlib-devel: development header files for the zlib library; zlib is a library for data compression.
epel-release: Used to enable the EPEL (Extra Packages for Enterprise Linux) repository, which provides a large number of additional packages.
openssh-server: OpenSSH server for remote login and management of the system via SSH.
libaio-devel: Development header file for an asynchronous I/O library that provides asynchronous file I/O support, commonly used in database and high performance applications.
vim: A powerful text editor with multiple language support and extended functionality.
ncurses-devel: development files for the ncurses library, which provides tools for building terminal controls and user interfaces.
socat: A versatile network tool for bi-directional data transfer that supports multiple protocols and address types.
conntrack: Connection tracking tool that displays and manipulates the connection tracking table in the kernel, commonly used for network firewall and NAT configuration.
telnet: A simple network protocol for remote login that allows communication with a remote host via the command line.
ipvsadm: Used to manage IPVS (IP Virtual Server), a load balancing module in the Linux kernel commonly used for high availability load balancing clusters.
6. Configure password-free login between hosts
Master node:
1) Configure password-free login from the Master host to the other two Node hosts
ssh-keygen # press Enter at every prompt without typing anything
2) Copy the newly generated public key to each host; type yes and then the corresponding host's password when prompted:
ssh-copy-id master
ssh-copy-id node01
ssh-copy-id node02
Node01 node:
1) Configure password-free login from the Node01 host to the other two hosts
ssh-keygen # press Enter at every prompt without typing anything
2) Copy the newly generated public key to each host; type yes and then the corresponding host's password when prompted:
ssh-copy-id master
ssh-copy-id node01
ssh-copy-id node02
Node02 node:
1) Configure password-free login from the Node02 host to the other two hosts
ssh-keygen # press Enter at every prompt without typing anything
2) Copy the newly generated public key to each host; type yes and then the corresponding host's password when prompted:
ssh-copy-id master
ssh-copy-id node01
ssh-copy-id node02
7. Turn off the firewall on all hosts.
If you prefer not to turn off the firewall, you can instead add firewall-cmd rules to filter and screen traffic; look up the details yourself, as they are not demonstrated here.
Turn off the firewall:
systemctl stop firewalld && systemctl disable firewalld
systemctl status firewalld # check the firewall status; when stopped it should show Active: inactive (dead)
Adding Firewall Rules:
6443: Kubernetes API Server
2379, 2380: etcd database
10250, 10255: kubelet service
10257: kube-controller-manager service
10259: kube-scheduler service
30000-32767: NodePort ports mapped on the physical machine
179, 473, 4789, 9099: Calico services
9090, 3000: Prometheus monitoring + Grafana panel
8443: Kubernetes Dashboard control panel
# Kubernetes API Server
firewall-cmd --zone=public --add-port=6443/tcp --permanent
# Etcd comprehensive database
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
# Kubelet service
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10255/tcp --permanent
# Kube-Controller-Manager service
firewall-cmd --zone=public --add-port=10257/tcp --permanent
# Kube-Scheduler service
firewall-cmd --zone=public --add-port=10259/tcp --permanent
# NodePort mapping port
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
# Calico service
firewall-cmd --zone=public --add-port=179/tcp --permanent # BGP
firewall-cmd --zone=public --add-port=473/tcp --permanent # IP-in-IP
firewall-cmd --zone=public --add-port=4789/udp --permanent # VXLAN
firewall-cmd --zone=public --add-port=9099/tcp --permanent # Calico service
# Prometheus monitoring + Grafana panel
firewall-cmd --zone=public --add-port=9090/tcp --permanent
firewall-cmd --zone=public --add-port=3000/tcp --permanent
# Kubernetes Dashboard control panel
firewall-cmd --zone=public --add-port=8443/tcp --permanent
# Reload the firewall configuration to apply the changes
firewall-cmd --reload
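The rules above can also be generated from a single port list. This sketch only prints the commands (a dry run), so the output can be reviewed and then piped to `sh` on a machine that actually runs firewalld:

```shell
# Generate the firewall-cmd rules from one port list (dry run: prints only).
ports="6443/tcp 2379-2380/tcp 10250/tcp 10255/tcp 10257/tcp 10259/tcp \
30000-32767/tcp 179/tcp 473/tcp 4789/udp 9099/tcp 9090/tcp 3000/tcp 8443/tcp"
for p in $ports; do
    echo "firewall-cmd --zone=public --add-port=$p --permanent"
done
echo "firewall-cmd --reload"
```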
8. Turn off the swap partition on all three hosts
Swap partitions are much slower to read and write than physical memory. If Kubernetes workloads rely on swap to compensate for insufficient memory, performance can degrade significantly, especially for resource-intensive container applications. Kubernetes prefers to expose nodes to out-of-memory situations directly rather than rely on swap, which prompts the scheduler to reallocate resources.
By default, kubelet checks the swap status at startup and requires that it be turned off. If swap is not turned off, Kubernetes may fail to start properly and report errors. Example:
[!WARNING]
kubelet: Swap is enabled; production deployments should disable swap.
For Kubernetes to work properly, it is recommended to permanently disable swap on all nodes and adjust the system's memory management accordingly:
swapoff -a # Shut down the current swap
sed -i '/swap/s/^/#/' /etc/fstab # add comment before swap
grep swap /etc/fstab # A successful shutdown would look like this: #/dev/mapper/rl-swap none swap defaults 0 0
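The fstab edit can likewise be checked on a scratch copy first. A minimal sketch; the sample swap line mirrors the /dev/mapper/rl-swap entry named in the comment above:

```shell
# Try the swap comment-out rule on a scratch fstab before editing the real one.
fstab=$(mktemp)
printf '%s\n' '/dev/mapper/rl-root / xfs defaults 0 0' '/dev/mapper/rl-swap none swap defaults 0 0' > "$fstab"
sed -i '/swap/s/^/#/' "$fstab"
grep swap "$fstab"    # prints: #/dev/mapper/rl-swap none swap defaults 0 0
rm -f "$fstab"
```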
9. Modify kernel parameters
Execute on each of the three hosts:
modprobe br_netfilter # load the br_netfilter kernel module
- modprobe: command for loading or unloading kernel modules.
- br_netfilter: this module allows bridged network traffic to be filtered by iptables rules and is typically used when network bridging is enabled.
- In Kubernetes container-networking environments, this module ensures that the Linux kernel correctly handles the filtering and forwarding of network traffic, especially for inter-container communication.
Execute on each of the three hosts:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf # put the configuration into effect
- net.bridge.bridge-nf-call-ip6tables = 1: allow IPv6 traffic crossing a Linux network bridge to be filtered by ip6tables.
- net.bridge.bridge-nf-call-iptables = 1: allow IPv4 traffic crossing a Linux network bridge to be filtered by iptables.
- net.ipv4.ip_forward = 1: allow the Linux kernel to forward (route) IPv4 packets.
These settings ensure that in Kubernetes, bridged network traffic can be filtered by iptables and ip6tables, and enable IPv4 packet forwarding to improve network security and communication.
10. Configure the yum sources for installing Docker and Containerd
Install a docker-ce repository on each of the three hosts (any one of the following, but only one); subsequent steps demonstrate only the Ali source.
# Ali source
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Tsinghua University open source software mirrors
yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
# University of Science and Technology of China open source mirrors
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
# CSU mirror repositories
yum-config-manager --add-repo /docker-ce/linux/centos/
# Huawei cloud sources
yum-config-manager --add-repo https://mirrors.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
11. Configure the yum sources needed for the K8S command line tools
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
12. Three hosts for time synchronization
Both Chrony and NTPD are time-synchronization tools, but Chrony has unique advantages in many respects. Below are some of Chrony's main advantages over NTPD, followed by a Chrony-based time-synchronization deployment:
Advantage | Chrony | NTPD |
---|---|---|
fast synchronization | Chrony can synchronize time faster when network latency is high or the connection is unstable. | It usually takes longer to achieve time synchronization. |
adaptable | Performs well on mobile devices or in virtual environments and adapts quickly to network changes. | Poor performance in these environments. |
Clock drift correction | The ability to better handle system clock drift is achieved through frequency tuning. | Weak handling of system clock drift. |
Simple configuration | The configuration is relatively simple and intuitive, easy to understand and use. | There are more configuration options and it may take more time to familiarize yourself with them. |
1) Install Chrony on all three hosts
yum -y install chrony
2) On all three hosts, modify the configuration file to add domestic NTP servers
echo "server iburst" >> /etc/chrony.conf
echo "server iburst" >> /etc/chrony.conf
echo "server iburst" >> /etc/chrony.conf
echo "server iburst" >> /etc/chrony.conf
tail -n 4 /etc/chrony.conf
systemctl restart chronyd
3) You can set up a scheduled task that restarts the chrony service every minute for time calibration (not required)
echo "* * * * * /usr/bin/systemctl restart chronyd" | tee -a /var/spool/cron/root
It is recommended to do this manually: first execute the crontab -e command, then add the following to the scheduled tasks
* * * * * /usr/bin/systemctl restart chronyd
- The five asterisks express the schedule; each asterisk is a time field, from left to right:
- First asterisk: minute (0-59)
- Second asterisk: hour (0-23)
- Third asterisk: day of month (1-31)
- Fourth asterisk: month (1-12)
- Fifth asterisk: day of week (0-7, with 0 and 7 both representing Sunday)
- Here every field is *, which means "every", so * * * * * means "every minute".
- /usr/bin/systemctl is the full path of the systemctl command, used to manage system services.
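The field breakdown can be sketched in shell (`set -f` keeps the asterisks from glob-expanding; field names follow the list above):

```shell
# Split a cron schedule expression into its five labelled fields.
schedule="* * * * *"
set -f                 # disable globbing so '*' stays literal
set -- $schedule       # word-split the expression into $1..$5
for field in minute hour day-of-month month day-of-week; do
    echo "$field: $1"
    shift
done
set +f
```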
13. Install Containerd
Containerd is a high-performance container runtime that handles container lifecycle management in Kubernetes, including creating, running, stopping, and deleting containers, as well as pulling and managing images from registries. It implements the Container Runtime Interface (CRI), integrating seamlessly with Kubernetes to ensure efficient resource utilization and fast container startup times. Containerd also supports event monitoring and logging for easier operation and debugging, making it a key component for container orchestration and management.
Install containerd 1.6.22 on each of the three hosts:
yum -y install containerd.io-1.6.22
yum -y install containerd.io-1.6.22 --allowerasing # choose this one if the first has install problems; use the first by default
Create containerd's configuration directory and modify the bundled configuration file.
mkdir -pv /etc/containerd
vim /etc/containerd/config.toml
The modifications are as follows:
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
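After editing, the two settings that matter most for this kubeadm setup (the systemd cgroup driver and the domestic sandbox image) can be spot-checked with grep. A sketch that runs against a small sample file; for real use, CONF would be /etc/containerd/config.toml, and the registry prefix is assumed to be the Aliyun mirror used elsewhere in this guide:

```shell
# Spot-check the two critical edits in a containerd config (sample file here).
CONF=$(mktemp)         # stand-in for /etc/containerd/config.toml
printf '%s\n' 'SystemdCgroup = true' 'sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"' > "$CONF"
grep -q 'SystemdCgroup = true' "$CONF" && echo "cgroup driver: systemd OK"
grep -q 'pause:3.9' "$CONF" && echo "sandbox image OK"
rm -f "$CONF"
```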
Sandbox image: sets the sandbox (pause) container image used by Kubernetes to support efficient container management.
- sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
hugeTLB controller: disables the hugeTLB controller, reducing memory-management complexity for environments that do not need it.
- disable_hugetlb_controller = true
CNI plugin paths: specify the binary and configuration paths for the CNI network plugin to ensure proper network functionality.
- bin_dir = "/opt/cni/bin"
- conf_dir = "/etc/cni/net.d"
Garbage-collection scheduler: adjust garbage-collection thresholds and startup delay to optimize container resource management and performance.
- pause_threshold = 0.02
- startup_delay = "100ms"
Streaming server: configure the address and port of the streaming service for efficient data transfer with clients.
- stream_server_address = "127.0.0.1"
- stream_server_port = "0"
Start containerd and set it to start automatically at boot:
systemctl enable containerd --now
systemctl status containerd
14. Install Docker-ce (to use docker's image-pulling capability)
1) Install the latest version of docker-ce:
yum -y install docker-ce
2) Start and set up docker to boot itself:
systemctl start docker && systemctl enable docker
3) Configure docker's registry mirror (accelerator) addresses:
Note: log in to the AliCloud accelerator console to view your Ali accelerator address; everyone's address is different.
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": [
"",
"",
"",
"https://dockerhub.",
"."
]
}
EOF
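Before restarting docker, it helps to catch JSON typos by writing the file and sanity-checking it. This sketch uses a temporary path and a placeholder mirror URL (not a real accelerator address):

```shell
# Write a sample daemon.json to a temp path and sanity-check its contents.
f=$(mktemp)            # stand-in for /etc/docker/daemon.json
cat > "$f" <<'EOF'
{
  "registry-mirrors": [
    "https://example-id.mirror.example.com"
  ]
}
EOF
grep -q '"registry-mirrors"' "$f" && echo "registry-mirrors key present"
rm -f "$f"
```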
systemctl daemon-reload
systemctl restart docker
systemctl status docker
II. K8S installation and deployment
1. Install K8S-related core components
Each of the three hosts installs K8S-related core components:
yum -y install kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl enable kubelet
- kubelet is the core agent on every node in a Kubernetes cluster. It manages and maintains the lifecycle of the Pods and containers on its node according to the control plane's instructions, ensures containers run as specified, and communicates regularly with the control plane: kubelet reports the state of the node and its Pods to the control node's apiserver, which stores this information in the etcd database.
- kubeadm is a tool that simplifies the installation and management of Kubernetes clusters; it quickly initializes control-plane nodes and joins worker nodes to the cluster, reducing the complexity of manual configuration.
- kubectl is the Kubernetes command-line tool used by administrators to interact with the cluster and perform tasks such as deploying applications, viewing resources, troubleshooting, and managing cluster state. It communicates directly with the Kubernetes API from the command line.
2. Initialize the cluster
1) On the Master node, initialize the K8S cluster using kubeadm:
Note: with a kubeadm-installed K8S, the components of both the control and worker nodes run as Pods.
kubeadm config print init-defaults >
- Generates a default configuration file and redirects the output into a file.
2) Modify the file just generated by kubeadm:
sed -i '1,$s/advertiseAddress: 1.2.3.4/advertiseAddress: 192.168.116.131/g'
sed -i "s|criSocket:.*|criSocket: unix://$(find / -name containerd.sock | head -n 1)|"
sed -i '1,$s/name: node/name: master/g'
sed -i 's|imageRepository: registry.k8s.io|imageRepository: registry.aliyuncs.com/google_containers|' # originally the foreign k8s registry; change it to a domestic one to speed up image downloads
sed -i '/serviceSubnet/a\ podSubnet: 10.244.0.0/12' # /a\ inserts the content on the line below the serviceSubnet line
cat <<EOF >>
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
more # Check it manually.
- advertiseAddress is the advertised address of the Kubernetes control node, through which other nodes communicate with the control-plane node. It is usually the IP address of the server the control node runs on; make sure it is the correct control node IP (my Master IP is 192.168.116.131).
- criSocket specifies the address of the container runtime (CRI) socket that Kubernetes uses to communicate with container runtimes such as containerd to manage and start containers. Locating the socket file with the find command and substituting its path into the configuration ensures the path is accurate and avoids manual search-and-configuration errors.
- IPVS mode supports more load-balancing algorithms and offers better performance, especially with many cluster nodes and services, significantly improving network forwarding efficiency and stability (if you do not set mode to ipvs, iptables is chosen by default, and iptables performs relatively poorly).
- Uniformly using systemd as the cgroup driver for both containers and system services, instead of cgroupfs, improves compatibility and stability between Kubernetes and the host system.
Note: the host IP, Pod IP, and Service IP ranges cannot be on the same network segment; overlap leads to IP conflicts, routing confusion, and network-isolation failures, affecting normal communication and network security in Kubernetes.
3) Initialize K8S from the file; first pull the images required by Kubernetes 1.28.0 on each of the three hosts (you can choose either of the two methods):
(1) Use the kubeadm command to quickly pull the images of all Kubernetes core components and ensure consistent versions:
kubeadm config images pull --image-repository="registry.aliyuncs.com/google_containers" --kubernetes-version=v1.28.0
(2) Use the ctr command when finer-grained control is needed, or when kubeadm runs into problems while pulling images; you can pull the images manually with ctr:
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/pause:3.9
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/etcd:3.5.9-0
ctr -n=k8s.io images pull registry.aliyuncs.com/google_containers/coredns:v1.10.1
4) On the Master control node, initialize the Kubernetes master node
kubeadm init --config= --ignore-preflight-errors=SystemVerification
On some operating systems kubelet may fail to start, with the prompt below; if initialization reports success, ignore the following steps:
[!WARNING]
dial tcp [::1]:10248: connect: connection refused
Executing systemctl status kubelet turns up the following error message:
[!WARNING]
Process: 2226953 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 2226953 (code=exited, status=1/FAILURE)
The solution is as follows, executed on the control node:
sed -i "s|ExecStart=/usr/bin/kubelet|ExecStart=/usr/bin/kubelet --container-runtime-endpoint=unix://$(find / -name containerd.sock | head -n 1) --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml|" /usr/lib/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet
kubeadm reset # remove the failed K8S installation
kubeadm init --config= --ignore-preflight-errors=SystemVerification # reinstall
3. Set up the Kubernetes configuration file so that the current user can use the kubectl command to interact with the Kubernetes cluster
Control node execution:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Worker nodes added to the K8S cluster
1) Before adding a worker node, execute the following command on the control node:
kubeadm token create --print-join-command
Successful execution results in the following prompt (token):
[!IMPORTANT]
kubeadm join 192.168.116.131:6443 --token xxiuik.9axtcp5xk3n2yo7b --discovery-token-ca-cert-hash sha256:ed678b5331259917248c966bf387e6aaf9f588798fb3977090fd6203780ceca9
2) Next, copy the generated join command and execute it on the worker nodes Node01 and Node02 respectively; on a successful join, the prompt is:
[!IMPORTANT]
This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Note: if a worker node errors while joining the cluster, you can append --ignore-preflight-errors=SystemVerification to ignore the preflight errors, as follows:
kubeadm join 192.168.116.131:6443 --token xxiuik.9axtcp5xk3n2yo7b --discovery-token-ca-cert-hash sha256:ed678b5331259917248c966bf387e6aaf9f588798fb3977090fd6203780ceca9 --ignore-preflight-errors=SystemVerification
3) Set up the user's kubectl environment so it can interact with the Kubernetes cluster:
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
- kubectl by default looks in the .kube/config file in the user's home directory for the Kubernetes cluster's connection information. If this file does not exist, kubectl finds no configuration pointing at the API server.
- If you have not executed the two commands above, kubectl has no configuration available, causing it to try the default API server address http://localhost:8080.
If you do not configure the user's kubectl environment, the following error occurs when viewing the node status:
[!WARNING]
E1004 22:30:56.770509 34971 :265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E1004 22:30:56.777399 34971 :265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E1004 22:30:56.780040 34971 :265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E1004 22:30:56.781809 34971 :265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E1004 22:30:56.783489 34971 :265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Finally check the status of all nodes (either at the control node or the worker node):
kubectl get nodes
[!IMPORTANT]
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 68m v1.28.2
node01 NotReady <none> 11m v1.28.2
node02 NotReady <none> 21m v1.28.2
5. Install k8s network component Calico
Calico is a popular open-source networking solution designed to provide efficient, scalable, and secure network connectivity for Kubernetes. It uses an IP-based network model in which every Pod gets a unique IP address, simplifying network management. Calico supports a variety of network policies for fine-grained traffic control and security, such as label-based access control that lets users define which Pods may communicate with each other.
1) Pull the calico images on each of the three hosts:
ctr image pull /ddn-k8s//calico/cni:v3.25.0
ctr image pull /ddn-k8s//calico/pod2daemon-flexvol:v3.25.0
ctr image pull /ddn-k8s//calico/node:v3.25.0
ctr image pull /ddn-k8s//calico/kube-controllers:v3.25.0
ctr image pull /ddn-k8s//calico/typha:v3.25.0
2) On the control node, download the calico 3.25.0 yaml configuration file (if the download fails, copy the URL into a browser and manually copy-paste the content onto the Master node for the same effect):
curl -O -L https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
3) Edit calico.yaml, find the CLUSTER_TYPE line, and add a key-value pair underneath it to ensure the correct NIC interface is used (note the indentation):
Original configuration:
- name: CLUSTER_TYPE
value: "k8s,bgp"
New Configuration:
- name: CLUSTER_TYPE
value: "k8s,bgp"
- name: IP_AUTODETECTION_METHOD
value: "interface=ens160"
Note: NIC names differ between operating systems; for example, on CentOS 7.9 the NIC is named ens33, so you would fill in value: "interface=ens33". Adapt as needed.
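Instead of editing by hand, the pair can be inserted right after the CLUSTER_TYPE value with awk. A sketch on a fragment (the indentation matches calico.yaml's env list, and ens160 is this guide's NIC name):

```shell
# Insert the IP_AUTODETECTION_METHOD pair right after the CLUSTER_TYPE value.
frag=$(mktemp)         # stand-in fragment of calico.yaml
printf '%s\n' '            - name: CLUSTER_TYPE' '              value: "k8s,bgp"' > "$frag"
awk '{print} /value: "k8s,bgp"/ {
    print "            - name: IP_AUTODETECTION_METHOD"
    print "              value: \"interface=ens160\""
}' "$frag"
rm -f "$frag"
```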
Note: if calico hits image-pull errors, you may not have changed the image addresses; you can switch the official sources to the Huawei mirror downloads as follows:
sed -i '1,$s|/calico/cni:v3.25.0|/ddn-k8s//calico/cni:v3.25.0|g'
sed -i '1,$s|/calico/node:v3.25.0|/ddn-k8s//calico/node:v3.25.0|g'
sed -i '1,$s|/calico/kube-controllers:v3.25.0|/ddn-k8s//calico/kube-controllers:v3.25.0|g'
4) Deploy the calico network service:
kubectl apply -f calico.yaml
View the details of all Pods in the Kubernetes cluster's kube-system namespace (check on both control and worker nodes):
kubectl get pod --namespace kube-system -o wide
The message that calico was successfully installed is roughly as follows:
[!IMPORTANT]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-665548954f-99gbl 1/1 Running 0 69s 10.251.205.131 master <none> <none>
calico-node-57bg8 1/1 Running 0 69s 192.168.116.132 node01 <none> <none>
calico-node-lfqtb 1/1 Running 0 69s 192.168.116.133 node02 <none> <none>
calico-node-vqg9b 1/1 Running 0 69s 192.168.116.131 master <none> <none>
coredns-66f779496c-44t4m 1/1 Running 0 13h 10.251.205.130 master <none> <none>
coredns-66f779496c-vmwdj 1/1 Running 0 13h 10.251.205.129 master <none> <none>
etcd-master 1/1 Running 0 13h 192.168.116.131 master <none> <none>
kube-apiserver-master 1/1 Running 0 13h 192.168.116.131 master <none> <none>
kube-controller-manager-master 1/1 Running 0 13h 192.168.116.131 master <none> <none>
kube-proxy-6v262 1/1 Running 1 12h 192.168.116.133 node02 <none> <none>
kube-proxy-s84wz 1/1 Running 0 13h 192.168.116.131 master <none> <none>
kube-proxy-z8k5d 1/1 Running 0 12h 192.168.116.132 node01 <none> <none>
kube-scheduler-master 1/1 Running 0 13h 192.168.116.131 master <none> <none>
III. Summary
Whether your deployment succeeds or runs into trouble, please leave feedback, and I will optimize and adjust accordingly.
▃▆█▇▄▖
▟◤▖ ◥█▎
◢◤ ▐ ▐▉
▗◤ ▂ ▗▖▕█▎
◤ ▗▅▖◥▄ ▀◣█▊
▐ ▕▎◥▖◣◤◢██
█◣ ◥▅█▀▐██◤
▐█▙▂ ◢██◤
◥██◣◢▄◤
▀██▅▇▀