
Attachment 038.Kubernetes_v1.30.3 Highly Available Deployment Architecture II


Deployment Components

Deploying Kubernetes involves several components, mainly kubeadm, kubelet, and kubectl.

Introduction to kubeadm

kubeadm provides a convenient and efficient "best practice" path for building a Kubernetes cluster. The tool implements everything needed to initialize a complete cluster; its main commands and functions are listed below (a quick example follows the list):

  • kubeadm init: Used to build a Kubernetes control plane node;
  • kubeadm join: Used to build Kubernetes worker nodes and add them to the cluster;
  • kubeadm upgrade: Used to upgrade a Kubernetes cluster to a new version;
  • kubeadm token: Used to manage the tokens used by kubeadm join;
  • kubeadm reset: Used to revert (reset) any changes made to a node via the kubeadm init or kubeadm join commands;
  • kubeadm certs: Used to manage Kubernetes certificates;
  • kubeadm kubeconfig: Used to manage kubeconfig files;
  • kubeadm version: Used to display (query) the version information of kubeadm;
  • kubeadm alpha: Used to preview experimental features so that feedback can be collected from the community.
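
Once kubeadm is installed, a quick sanity check is to query its version and the image list it expects; for example (output varies with the installed release):

[root@master01 ~]# kubeadm version -o short
[root@master01 ~]# kubeadm config images list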

More references: Introduction to Kubeadm

Introduction to kubelet

The kubelet is a core component of a Kubernetes cluster; it runs on every node and drives the container runtime, such as Docker or containerd. This is typically implemented through the CRI: the kubelet talks to the runtime over the CRI so that Kubernetes can control containers.

kubelet manages the full container lifecycle, including configuring the container network and managing container data volumes. Its core functions include the following (a quick way to inspect a running kubelet is shown after the list):

  • Handle Pod update events;
  • Pod lifecycle management;
  • Report node status information.
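
Because kubelet runs as a systemd service on every node, its state and logs can be inspected directly on the host once it is installed, for example:

[root@master01 ~]# systemctl status kubelet
[root@master01 ~]# journalctl -u kubelet -f          # follow the kubelet logs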

More references: Introduction to kubelet

Introduction to kubectl

kubectl is the command-line tool for controlling a Kubernetes cluster: it communicates with the apiserver to deploy and manage applications on Kubernetes.
Using kubectl, you can inspect cluster resources and create, delete, and update components.
A number of subcommands are integrated to make managing a Kubernetes cluster easier; the main commands are listed below, followed by a few typical examples:

  • kubectl -h: show subcommands;
  • kubectl options: view global options;
  • kubectl <command> --help: view help for a subcommand;
  • kubectl [command] [PARAMS] -o=<format>: set the output format, e.g. json, yaml, etc.;
  • kubectl explain [RESOURCE]: view the definition of a resource.
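
A few typical invocations of the commands above, assuming a working kubeconfig:

[root@master01 ~]# kubectl get nodes -o wide         # list cluster nodes with extra columns
[root@master01 ~]# kubectl explain pod.spec          # show the schema of the Pod spec field
[root@master01 ~]# kubectl get pods -A -o yaml       # output all Pods in YAML format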

More references: Introduction to kubectl

Solution overview

Solution introduction

This solution uses the kubeadm deployment tool to build a complete, production-grade highly available Kubernetes cluster, together with the related peripheral Kubernetes components.
Its key points are listed below:

  • Version: Kubernetes version 1.30.3;
  • kubeadm: Deploy Kubernetes with kubeadm;
  • OS: CentOS 8;
  • etcd: deployed in stacked (co-located) mode on the control plane nodes;
  • HAProxy: runs on each master node and reverse proxies the three masters' kube-apiserver port 6443;
  • KeepAlived: Used to achieve high availability of apiserver;
  • Other major deployment components include:
    • Metrics: Metrics component, used to provide relevant monitoring metrics;
    • Dashboard: the front-end graphical interface for a Kubernetes cluster;
    • Helm: The Kubernetes Helm package manager tool for subsequent rapid deployment of applications using the helm integration package;
    • Ingress: a Kubernetes service-exposure component that provides Layer 7 load balancing, similar to Nginx, and can create multiple mapping rules between external and internal endpoints;
    • containerd: the underlying container runtime for Kubernetes;
    • Longhorn: A Kubernetes dynamic storage component to provide persistent storage for Kubernetes.

Tip: The scripts used in the deployment of this program are provided by me and may be updated from time to time.

Deployment planning

Node planning

Node hostname   IP              Type                      Services running
master01        172.24.10.11    Kubernetes master node    kubeadm, kubelet, kubectl, KeepAlived, containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, calico, WebUI, metrics, ingress, Longhorn ui node
master02        172.24.10.12    Kubernetes master node    kubeadm, kubelet, kubectl, KeepAlived, containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, calico, WebUI, metrics, ingress, Longhorn ui node
master03        172.24.10.13    Kubernetes master node    kubeadm, kubelet, kubectl, KeepAlived, containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, calico, WebUI, metrics, ingress, Longhorn ui node
worker01        172.24.10.14    Kubernetes worker node    kubelet, containerd, calico, Longhorn storage node
worker02        172.24.10.15    Kubernetes worker node    kubelet, containerd, calico, Longhorn storage node
worker03        172.24.10.16    Kubernetes worker node    kubelet, containerd, calico, Longhorn storage node
worker04        172.24.10.17    Kubernetes worker node    kubelet, containerd, calico, Longhorn storage node

High availability for a Kubernetes cluster mainly means high availability of the control plane: multiple Master nodes (usually an odd number) and etcd members, with worker nodes connecting to the masters through a front-end load-balanced VIP.

Architecture diagram

Characteristics of co-locating etcd with the Master node components in a Kubernetes high-availability architecture:

  • Requires fewer servers, giving a hyper-converged architecture;
  • Easy to deploy and manage;
  • Easy to scale horizontally;
  • etcd reuses the high availability of the Kubernetes master nodes;
  • There is some risk: if a master host goes down, the control plane and etcd each lose a member at once, which reduces cluster redundancy to a degree.

Tip: This deployment uses a Keepalived + HAProxy architecture to achieve high availability of the Kubernetes apiserver.

Hostname Configuration

All node hostnames need to be configured accordingly.

[root@localhost ~]# hostnamectl set-hostname master01 # Other nodes modified in turn

Production environments usually deploy an internal DNS server and use it for name resolution; this guide uses the local hosts file for resolution instead.
The hosts file only needs to be modified on master01; it is later distributed in bulk to all other nodes.

[root@master01 ~]# cat >> /etc/hosts << EOF
172.24.10.11 master01
172.24.10.12 master02
172.24.10.13 master03
172.24.10.14 worker01
172.24.10.15 worker02
172.24.10.16 worker03
EOF

Variable preparation

To automate deployment and file distribution, define the relevant hostnames, IP arrays, and other variables in advance.

[root@master01 ~]# wget /mydeploy/k8s/v1.30.3/

[root@master01 ~]# vi #Confirm the hostname and IP of the relevant host.
#! /bin/sh
#****************************************************************#
# ScriptName.
# Author: xhy
# Create Date: 2022-10-11 17:10
# Modify Author: xhy
# Modify Date: 2023-11-30 23:00
# Version: v1
#***************************************************************#

# Cluster MASTER machine IP array
export MASTER_IPS=(172.24.10.11 172.24.10.12 172.24.10.13)

# Array of hostnames corresponding to cluster MASTER IPs
export MASTER_NAMES=(master01 master02 master03)

# Array of cluster NODE machine IPs
export NODE_IPS=(172.24.10.14 172.24.10.15 172.24.10.16)

# Array of hostnames corresponding to cluster NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)

# Array of IPs of all machines in the cluster
export ALL_IPS=(172.24.10.11 172.24.10.12 172.24.10.13 172.24.10.14 172.24.10.15 172.24.10.16)

# Hostname array for all IPs in the cluster
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)

Mutual Trust (SSH) Configuration

To make it easy to distribute files and run commands remotely, this solution configures an SSH trust relationship from the master01 node to the other nodes, i.e., master01 can manage all other nodes without entering a password.

[root@master01 ~]# source #Load variables
    
[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${all_ip}
  done
  
[root@master01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${all_name}
  done

Tip: This operation is only required on the master01 node.
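
Once the keys have been distributed, passwordless access can be verified with a quick loop over the variables defined above:

[root@master01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "hostname"
  done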

Environment initialization

kubeadm itself only deploys the Kubernetes cluster; before formally using kubeadm, the operating system environment must be prepared, i.e., environment initialization.
In this solution the environment initialization is automated with a script.
The script below initializes the base environment; its main functions are:

  • Installing containerd, the underlying container component of the Kubernetes platform
  • Turn off SELinux and firewalls
  • Optimizing relevant kernel parameters and tuning the configuration of the base system for Kubernetes clusters in production environments
  • Turn off swap
  • Setting up related modules, mainly forwarding modules
  • Configure the relevant base software and deploy the base dependency packages required by the Kubernetes cluster
[root@master01 ~]# wget /mydeploy/k8s/v1.30.3/

[root@master01 ~]# vim 
#!/bin/sh
#****************************************************************#
# ScriptName: 
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2024-02-28 22:38
# Version: v1
#***************************************************************#
# Initialize the machine. This needs to be executed on every machine.
rm -f /var/lib/rpm/__db.00*
rpm -vv --rebuilddb
#yum clean all 
#yum makecache
sleep 3s
# Install containerd
CONVERSION=1.6.32
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo /docker-ce/linux/centos/
sudo sed -i 's++/docker-ce+' /etc//
sleep 3s
yum -y install containerd.io-${CONVERSION}
mkdir /etc/containerd

cat > /etc/containerd/config.toml <<EOF
disabled_plugins = ["restart"]

[plugins."io.containerd.runtime.v1.linux"]
shim_debug = true

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = [""]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry./pause:3.9"
EOF

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd
systemctl enable containerd --now
systemctl status containerd

# Disable the SELinux.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld

# Modify related kernel parameters & Disable the swap.
cat > /etc// << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc// >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter
modprobe overlay

# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
modprobe -- br_netfilter
modprobe -- overlay
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

# Install rpm
yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget iproute-tc

# Update kernel
# rpm --import /
# rpm -Uvh /elrepo-release-7.
# mv -b /etc// /etc//backup
# wget -c /myoptions/ -O /etc// 
# yum --disablerepo="*" --enablerepo="elrepo-kernel" install -y kernel-ml
# sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
# grub2-mkconfig -o /boot/grub2/
# yum -y --exclude=docker* update

# Reboot the machine.
# reboot

Tip: This operation is only required on the master01 node.

  • For some features it may be necessary to upgrade the kernel; see the Upgrade Kernel reference.
  • Kernel nf_conntrack_ipv4 has been changed to nf_conntrack for version 4.19 and above.
[root@master01 ~]# source 
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp -rp /etc/hosts root@${all_ip}:/etc/hosts
    scp -rp  root@${all_ip}:/root/
    ssh root@${all_ip} "bash /root/"
  done
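
After the script has run everywhere, the key settings can be spot-checked remotely, e.g. the SELinux configuration, swap, and the forwarding modules (a minimal check):

[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "grep ^SELINUX= /etc/selinux/config; swapon --show; lsmod | grep -E 'br_netfilter|overlay'"
  done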

Deploying highly available components

HAProxy Installation

HAProxy provides high availability, load balancing, and proxying for TCP (which allows it to reverse proxy applications such as kube-apiserver) and HTTP applications, with support for virtual hosts. It is a free, fast, and reliable high availability solution.

[root@master01 ~]# wget /haproxy/3.0/src/haproxy-3.0.3.tar.gz

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl3 libnl3-devel libnfnetlink openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel"
    scp -rp haproxy-3.0.3.tar.gz root@${master_ip}:/root/
    ssh root@${master_ip} "tar -zxvf haproxy-3.0.3.tar.gz"
    ssh root@${master_ip} "cd haproxy-3.0.3/ && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy"
    ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
    ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
    ssh root@${master_ip} "mkdir -p /etc/haproxy && mkdir -p /etc/haproxy/ && cp -r /root/haproxy-3.0.3/examples/errorfiles/ /usr/local/haproxy/"
  done

Tip: The official Haproxy reference:/
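
A quick way to confirm the build and installation succeeded on every master (the binary was copied to /usr/sbin above) is a version check:

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "haproxy -v"
  done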

KeepAlived Installation

KeepAlived is an LVS service high availability solution based on the VRRP protocol to solve the problem of a single point of failure in static routes.
In this scenario Keepalived is deployed and running on all three master nodes: one acts as the master server (MASTER) and the other two as backup servers (BACKUP).
The MASTER node holds the VIP and provides service through it, periodically sending VRRP advertisements to the BACKUP nodes. When the BACKUP nodes stop receiving these advertisements, i.e., when the MASTER is down, a BACKUP node takes over the virtual IP and continues to provide service, ensuring high availability.

[root@master01 ~]# wget /software/keepalived-2.3.1.tar.gz
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install curl gcc gcc-c++ make libnl3 libnl3-devel libnfnetlink openssl-devel"
    scp -rp keepalived-2.3.1.tar.gz root@${master_ip}:/root/
    ssh root@${master_ip} "tar -zxvf keepalived-2.3.1.tar.gz"
    ssh root@${master_ip} "cd keepalived-2.3.1/ && LDFLAGS=\"$LDFLAGS -L /usr/local/openssl/lib/\" ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
    ssh root@${master_ip} "systemctl enable keepalived"
  done

Tip: As above, only the master01 node needs to be operated on to automate installation on all master nodes. If the error undefined reference to `OPENSSL_init_ssl' occurs, pass the path to the OpenSSL lib directory:

LDFLAGS="$LDFLAGS -L /usr/local/openssl/lib/" ./configure --sysconf=/etc --prefix=/usr/local/keepalived

Hint: KeepAlive official reference:/
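
Similarly, the keepalived build can be verified on each master before any configuration is written:

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "keepalived --version"
  done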

Creating Configuration Files

Create the configuration of the relevant components required for cluster deployment, using scripts to automate the creation of the relevant configuration files.

[root@master01 ~]# wget /mydeploy/k8s/v1.30.3/ #Pull automated deployment scripts

[root@master01 ~]# vim
#!/bin/sh
#****************************************************************#
# ScriptName: k8sconfig
# Author: xhy
# Create Date: 2022-06-08 20:00
# Modify Author: xhy
# Modify Date: 2024-02-25 23:57
# Version: v3
#***************************************************************#

#######################################
# set variables below to create the config files, all files will create at ./kubeadm directory
#######################################

# master keepalived virtual ip address
export K8SHA_VIP=172.24.10.100

# master01 ip address
export K8SHA_IP1=172.24.10.11

# master02 ip address
export K8SHA_IP2=172.24.10.12

# master03 ip address
export K8SHA_IP3=172.24.10.13

# master01 hostname
export K8SHA_HOST1=master01

# master02 hostname
export K8SHA_HOST2=master02

# master03 hostname
export K8SHA_HOST3=master03

# master01 network interface name
export K8SHA_NETINF1=eth0

# master02 network interface name
export K8SHA_NETINF2=eth0

# master03 network interface name
export K8SHA_NETINF3=eth0

# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d

# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0/16

# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0/16

# kubernetes CIDR pod mtu
export K8SHA_PODMTU=1450

##############################
# please do not modify anything below
##############################

mkdir -p kubeadm/$K8SHA_HOST1/{keepalived,haproxy}
mkdir -p kubeadm/$K8SHA_HOST2/{keepalived,haproxy}
mkdir -p kubeadm/$K8SHA_HOST3/{keepalived,haproxy}
mkdir -p kubeadm/keepalived
mkdir -p kubeadm/haproxy

echo "create directory files success."

# wget all files
wget -c -P kubeadm/keepalived/ /mydeploy/k8s/common/
wget -c -P kubeadm/keepalived/ /mydeploy/k8s/common/check_apiserver.sh
wget -c -P kubeadm/haproxy/ /mydeploy/k8s/common/
wget -c -P kubeadm/haproxy/ /mydeploy/k8s/common/
wget -c -P kubeadm/ /mydeploy/k8s/v1.30.3/
wget -c -P kubeadm/calico/ /mydeploy/k8s/calico/v3.28.1/
wget -c -P kubeadm/ /mydeploy/k8s/v1.30.3/

echo "down files success."

# create all files
sed \
-e "s/K8SHA_HOST1/${K8SHA_HOST1}/g" \
-e "s/K8SHA_HOST2/${K8SHA_HOST2}/g" \
-e "s/K8SHA_HOST3/${K8SHA_HOST3}/g" \
-e "s/K8SHA_IP1/${K8SHA_IP1}/g" \
-e "s/K8SHA_IP2/${K8SHA_IP2}/g" \
-e "s/K8SHA_IP3/${K8SHA_IP3}/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s!K8SHA_PODCIDR!${K8SHA_PODCIDR}!g" \
-e "s!K8SHA_SVCCIDR!${K8SHA_SVCCIDR}!g" \
kubeadm/ > kubeadm/

echo "create files success."

# create all keepalived files
chmod u+x kubeadm/keepalived/check_apiserver.sh
cp kubeadm/keepalived/check_apiserver.sh kubeadm/$K8SHA_HOST1/keepalived
cp kubeadm/keepalived/check_apiserver.sh kubeadm/$K8SHA_HOST2/keepalived
cp kubeadm/keepalived/check_apiserver.sh kubeadm/$K8SHA_HOST3/keepalived

sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \
-e "s/K8SHA_KA_PRIO/102/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
kubeadm/keepalived/ > kubeadm/$K8SHA_HOST1/keepalived/

sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \
-e "s/K8SHA_KA_PRIO/101/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
kubeadm/keepalived/ > kubeadm/$K8SHA_HOST2/keepalived/

sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF3}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP3}/g" \
-e "s/K8SHA_KA_PRIO/100/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
kubeadm/keepalived/ > kubeadm/$K8SHA_HOST3/keepalived/

echo "create keepalived files success. kubeadm/$K8SHA_HOST1/keepalived/"
echo "create keepalived files success. kubeadm/$K8SHA_HOST2/keepalived/"
echo "create keepalived files success. kubeadm/$K8SHA_HOST3/keepalived/"

# create all haproxy files
sed \
-e "s/K8SHA_IP1/$K8SHA_IP1/g" \
-e "s/K8SHA_IP2/$K8SHA_IP2/g" \
-e "s/K8SHA_IP3/$K8SHA_IP3/g" \
-e "s/K8SHA_HOST1/$K8SHA_HOST1/g" \
-e "s/K8SHA_HOST2/$K8SHA_HOST2/g" \
-e "s/K8SHA_HOST3/$K8SHA_HOST3/g" \
kubeadm/haproxy/ > kubeadm/haproxy/

echo "create haproxy files success. kubeadm/$K8SHA_HOST1/haproxy/"
echo "create haproxy files success. kubeadm/$K8SHA_HOST2/haproxy/"
echo "create haproxy files success. kubeadm/$K8SHA_HOST3/haproxy/"

# create calico yaml file
sed \
-e "s!K8SHA_PODCIDR!${K8SHA_PODCIDR}!g" \
-e "s!K8SHA_PODMTU!${K8SHA_PODMTU}!g" \
kubeadm/calico/ > kubeadm/calico/

echo "create calico file success."

# scp all file
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST1:/etc/haproxy/
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST2:/etc/haproxy/
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST3:/etc/haproxy/
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST1:/usr/lib/systemd/system/
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST2:/usr/lib/systemd/system/
scp -rp kubeadm/haproxy/ root@$K8SHA_HOST3:/usr/lib/systemd/system/

scp -rp kubeadm/$K8SHA_HOST1/keepalived/* root@$K8SHA_HOST1:/etc/keepalived/
scp -rp kubeadm/$K8SHA_HOST2/keepalived/* root@$K8SHA_HOST2:/etc/keepalived/
scp -rp kubeadm/$K8SHA_HOST3/keepalived/* root@$K8SHA_HOST3:/etc/keepalived/

echo "scp haproxy & keepalived file success."

chmod u+x kubeadm/*.sh

[root@master01 ~]# bash

Explanation: As above, only the master01 node needs to be operated on. Executing the script produces the following configuration files:

  • : the kubeadm initialization configuration file, located in the kubeadm/ directory; see the kubeadm configuration reference
  • keepalived: keepalived configuration file, located in the /etc/keepalived directory of each master node
  • haproxy: the configuration file for haproxy, located in the /etc/haproxy/ directory of each master node
  • : calico network component deployment files, located in the kubeadm/calico/ directory
[root@master01 ~]# vim kubeadm/ #Check cluster initialization configuration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"                 #Set the svc network segment
  podSubnet: "10.10.0.0/16"                     #Set the pod network segment
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.30.3"                    #Set the installation version
controlPlaneEndpoint: "172.24.10.100:16443"     #Set the API VIP address
apiServer:
  certSANs:                                     #Set the additional SANs for the API server certificate
  - 127.0.0.1
  - master01
  - master02
  - master03
  - 172.24.10.11
  - 172.24.10.12
  - 172.24.10.13
  - 172.24.10.100
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "registry."
#clusterName: "example-cluster"

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Tip: As above, only the master01 node needs to be operated on; for more configuration options see the kubeadm configuration (v1beta3) reference.
The default kubeadm configuration can be generated with kubeadm config print init-defaults and redirected to a file.
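
Before the real initialization, the generated configuration can also be exercised with a dry run, which renders the manifests without changing the node (the file name kubeadm-config.yaml below is illustrative; point it at the config file generated into the kubeadm/ directory):

[root@master01 ~]# kubeadm init --config=kubeadm/kubeadm-config.yaml --dry-run     # illustrative file name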

Starting services

Start the Keepalived and HAProxy services on the master nodes to build high availability.

  • Checking the service configuration
[root@master01 ~]# cat /etc/keepalived/keepalived.conf # All nodes: confirm the keepalived configuration file
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -60
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 172.24.10.11
    virtual_router_id 51
    priority 102
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d
    }
    virtual_ipaddress {
        172.24.10.100
    }
    track_script {
       check_apiserver
    }
}
[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh # All nodes: confirm the keepalived health-check script
#!/bin/bash

# if check error then repeat check for 12 times, else exit
err=0
for k in $(seq 1 12)
do
    check_code=$(curl -k https://localhost:6443)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    # if apiserver is down send SIG=1
    echo 'apiserver error!'
    exit 1
else
    # if apiserver is up send SIG=0
    echo 'apiserver normal!'
    exit 0
fi
  • Starting Highly Available Services
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl enable haproxy.service --now && systemctl restart haproxy.service"
    ssh root@${master_ip} "systemctl enable keepalived.service --now && systemctl restart keepalived.service"
    ssh root@${master_ip} "systemctl status haproxy.service | grep Active"
    ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "ping -c1 172.24.10.100"
done # wait about 10s, then perform the check

Tip: As above, only the Master01 node needs to be operated so that all nodes can start the service automatically.
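
Besides the ping test, you can confirm which master currently holds the VIP by inspecting the interface addresses; the VIP should appear on exactly one node at a time:

[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "ip addr show eth0 | grep 172.24.10.100 || true"
  done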

Cluster deployment

Related Component Packages

The following packages need to be installed on each machine:

  • kubeadm: Command used to initialize the cluster;
  • kubelet: Used to launch pods, containers, etc. on each node in the cluster;
  • kubectl: Command line tool used to communicate with the cluster.

kubeadm cannot install or manage kubelet or kubectl, so you must install kubelet and kubectl before initializing the cluster and ensure that they meet the versioning requirements of the Kubernetes control layer installed through kubeadm.
If the version does not fulfill the matching requirements, it may lead to some unexpected errors or problems.
For details on installing the individual components, see the related introduction and instructions.

Tip: For the components compatible with Kubernetes 1.30.3, see the changelog reference: /kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.

Formal installation

Quickly install kubeadm, kubelet, and kubectl components for all nodes.

[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=/kubernetes-new/core/stable/v1.30/rpm/repodata/
EOF"
    ssh root@${all_ip} "yum install -y kubelet-1.30.3-150500.1.1 kubectl-1.30.3-150500.1.1 --disableexcludes=kubernetes"
    ssh root@${all_ip} "yum install -y kubeadm-1.30.3-150500.1.1 --disableexcludes=kubernetes"
    ssh root@${all_ip} "systemctl enable kubelet"
done
[root@master01 ~]# yum list kubelet --showduplicates #View the available versions

Tip: As above, only the master01 node needs to be operated on to automate installation on all nodes. There is no need to start kubelet at this point; the initialization process starts it automatically. If it fails to start now, the error can be ignored.

Description: The cri-tools, kubernetes-cni, and socat dependencies are installed at the same time:
socat: a kubelet dependency;
cri-tools: the command-line tools for the CRI (Container Runtime Interface).
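
With the crictl configuration written during environment initialization, cri-tools can talk to containerd directly; a couple of typical checks are:

[root@master01 ~]# crictl info          # show CRI runtime status and configuration
[root@master01 ~]# crictl ps -a         # list all CRI containers (empty before cluster initialization)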

Cluster Initialization

Pulling Mirrors

The initialization process pulls a large number of images, and since they are hosted abroad they may fail to pull, causing Kubernetes initialization to fail. It is recommended to prepare the images in advance to ensure the subsequent initialization succeeds.

[root@master01 ~]# kubeadm --kubernetes-version=v1.30.3 config images list     	#List required mirrors

[root@master01 ~]# vim kubeadm/
#!/bin/sh
#***************************************************************#
# ScriptName: v1.30.3/
# Author: xhy
# Create Date: 2024-08-08 22:00
# Modify Author: xhy
# Modify Date: 2024-08-08 22:00
# Version: v1
#***************************************************************#

KUBE_VERSION=v1.30.3
KUBE_PAUSE_VERSION=3.9
ETCD_VERSION=3.5.12-0
CORE_DNS_VERSION=v1.11.1
K8S_URL=registry.
UCLOUD_URL=/imxhy
LONGHORN_URL=longhornio
CALICO_URL='/calico'
CALICO_VERSION=v3.28.1
METRICS_SERVER_VERSION=v0.7.1
INGRESS_VERSION=v1.11.1
INGRESS_WEBHOOK_VERSION=v1.4.1
LONGHORN_VERSION=v1.6.2
LONGHORN_VERSION2=v0.0.37
CSI_ATTACHER_VERSION=v4.5.1
CSI_NODE_DRIVER_VERSION=v2.9.2
CSI_PROVISIONER_VERSION=v3.6.4
CSI_RESIZER_VERSION=v1.10.1
CSI_SNAP_VERSION=v6.3.4
CSI_LIVE_VERSION=v2.12.0

mkdir -p k8simages/

# config node hostname
export ALL_IPS=(master02 master03 worker01 worker02 worker03)

kubeimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
)

for kubeimageName in ${kubeimages[@]} ; do
echo ${kubeimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${kubeimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${kubeimageName} ${K8S_URL}/${kubeimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${kubeimageName}
ctr -n k8s.io images export k8simages/${kubeimageName}\.tar ${K8S_URL}/${kubeimageName}
done

corednsimages=(coredns:${CORE_DNS_VERSION}
)

for corednsimageName in ${corednsimages[@]} ; do
echo ${corednsimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${corednsimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${corednsimageName} ${K8S_URL}/coredns/${corednsimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${corednsimageName}
ctr -n k8s.io images export k8simages/${corednsimageName}\.tar ${K8S_URL}/coredns/${corednsimageName}
done

calimages=(cni:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})

for calimageName in ${calimages[@]} ; do
echo ${calimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${calimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${calimageName} ${CALICO_URL}/${calimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${calimageName}
ctr -n k8s.io images export k8simages/${calimageName}\.tar ${CALICO_URL}/${calimageName}
done

metricsimages=(metrics-server:${METRICS_SERVER_VERSION})

for metricsimageName in ${metricsimages[@]} ; do
echo ${metricsimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${metricsimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${metricsimageName} ${K8S_URL}/metrics-server/${metricsimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${metricsimageName}
ctr -n k8s.io images export k8simages/${metricsimageName}\.tar ${K8S_URL}/metrics-server/${metricsimageName}
done

ingressimages=(controller:${INGRESS_VERSION}
kube-webhook-certgen:${INGRESS_WEBHOOK_VERSION}
)

for ingressimageName in ${ingressimages[@]} ; do
echo ${ingressimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${ingressimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${ingressimageName} ${K8S_URL}/ingress-nginx/${ingressimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${ingressimageName}
ctr -n k8s.io images export k8simages/${ingressimageName}\.tar ${K8S_URL}/ingress-nginx/${ingressimageName}
done

longhornimages01=(longhorn-engine:${LONGHORN_VERSION}
longhorn-instance-manager:${LONGHORN_VERSION}
longhorn-manager:${LONGHORN_VERSION}
longhorn-ui:${LONGHORN_VERSION}
backing-image-manager:${LONGHORN_VERSION}
longhorn-share-manager:${LONGHORN_VERSION}
)

for longhornimageNameA in ${longhornimages01[@]} ; do
echo ${longhornimageNameA}
ctr -n k8s.io images pull ${UCLOUD_URL}/${longhornimageNameA}
ctr -n k8s.io images tag ${UCLOUD_URL}/${longhornimageNameA} ${LONGHORN_URL}/${longhornimageNameA}
ctr -n k8s.io images rm ${UCLOUD_URL}/${longhornimageNameA}
ctr -n k8s.io images export k8simages/${longhornimageNameA}\.tar ${LONGHORN_URL}/${longhornimageNameA}
done

longhornimages02=(support-bundle-kit:${LONGHORN_VERSION2})

for longhornimageNameB in ${longhornimages02[@]} ; do
echo ${longhornimageNameB}
ctr -n k8s.io images pull ${UCLOUD_URL}/${longhornimageNameB}
ctr -n k8s.io images tag ${UCLOUD_URL}/${longhornimageNameB} ${LONGHORN_URL}/${longhornimageNameB}
ctr -n k8s.io images rm ${UCLOUD_URL}/${longhornimageNameB}
ctr -n k8s.io images export k8simages/${longhornimageNameB}\.tar ${LONGHORN_URL}/${longhornimageNameB}
done

csiimages=(csi-attacher:${CSI_ATTACHER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
csi-snapshotter:${CSI_SNAP_VERSION}
livenessprobe:${CSI_LIVE_VERSION}
)

for csiimageName in ${csiimages[@]} ; do
echo ${csiimageName}
ctr -n k8s.io images pull ${UCLOUD_URL}/${csiimageName}
ctr -n k8s.io images tag ${UCLOUD_URL}/${csiimageName} ${LONGHORN_URL}/${csiimageName}
ctr -n k8s.io images rm ${UCLOUD_URL}/${csiimageName}
ctr -n k8s.io images export k8simages/${csiimageName}\.tar ${LONGHORN_URL}/${csiimageName}
done

allimages=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION}
cni:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION}
metrics-server:${METRICS_SERVER_VERSION}
controller:${INGRESS_VERSION}
kube-webhook-certgen:${INGRESS_WEBHOOK_VERSION}
longhorn-engine:${LONGHORN_VERSION}
longhorn-instance-manager:${LONGHORN_VERSION}
longhorn-manager:${LONGHORN_VERSION}
longhorn-ui:${LONGHORN_VERSION}
backing-image-manager:${LONGHORN_VERSION}
longhorn-share-manager:${LONGHORN_VERSION}
support-bundle-kit:${LONGHORN_VERSION2}
csi-attacher:${CSI_ATTACHER_VERSION}
csi-node-driver-registrar:${CSI_NODE_DRIVER_VERSION}
csi-provisioner:${CSI_PROVISIONER_VERSION}
csi-resizer:${CSI_RESIZER_VERSION}
csi-snapshotter:${CSI_SNAP_VERSION}
livenessprobe:${CSI_LIVE_VERSION}
)
for all_ip in ${ALL_IPS[@]}
  do  
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir /root/k8simages"
    scp -rp k8simages/* root@${all_ip}:/root/k8simages/
  done

for allimageName in ${allimages[@]}
  do
  for all_ip in ${ALL_IPS[@]}
    do
    echo "${allimageName} copy to ${all_ip}"
    ssh root@${all_ip} "ctr -n k8s.io images import k8simages/${allimageName}\.tar"
    done
  done
  
[root@master01 ~]# bash kubeadm/                     # Confirm the versions and download the images in advance

Tip: As above, only the master01 node needs to be operated on to distribute the images to all nodes.
Pay attention to the versions; the script above pulls the images required for Kubernetes v1.30.3.

[root@master01 ~]# ctr -n k8s.io images ls # validate
[root@master02 ~]# crictl images
IMAGE TAG IMAGE ID SIZE
/calico/cni v3.28.1 f6d76a1259a8c 94.6MB
/calico/kube-controllers v3.28.1 9d19dff735fa0 35MB
/calico/node v3.28.1 8bbeb9e1ee328 118MB
/longhornio/backing-image-manager v1.6.2 9b8cf5184bda1 133MB
/longhornio/csi-attacher v4.5.1 ebcde6f69ddda 27.5MB
/longhornio/csi-node-driver-registrar v2.9.2 438c692b0cb6d 10.8MB
/longhornio/csi-provisioner v3.6.4 cc753cf7b8127 28.7MB
/longhornio/csi-resizer v1.10.1 644d77abe33db 28.1MB
/longhornio/csi-snapshotter v6.3.4 eccecdceb86c0 26.9MB
/longhornio/livenessprobe v2.12.0 38ae1b6759b01 13.4MB
/longhornio/longhorn-engine v1.6.2 7fb50a1bbe317 142MB
/longhornio/longhorn-instance-manager v1.6.2 23292e266e0eb 272MB
/longhornio/longhorn-manager v1.6.2 6b0b2d18564be 112MB
/longhornio/longhorn-share-manager v1.6.2 f578840264031 81.1MB
/longhornio/longhorn-ui v1.6.2 b1c8e3638fc43 75.6MB
/longhornio/support-bundle-kit v0.0.37 df2168e6bf552 89.3MB
registry./coredns/coredns v1.11.1 cbb01a7bd410d 18.2MB
registry./etcd 3.5.12-0 3861cfcd7c04c 57.2MB
registry./ingress-nginx/controller v1.11.1 5a3c471280784 105MB
registry./ingress-nginx/kube-webhook-certgen v1.4.1 684c5ea3b61b2 23.9MB
registry./kube-apiserver v1.30.3 1f6d574d502f3 32.8MB
registry./kube-controller-manager v1.30.3 76932a3b37d7e 31.1MB
registry./kube-proxy v1.30.3 55bb025d2cfa5 29MB
registry./kube-scheduler v1.30.3 3edc18e7b7672 19.3MB
registry./metrics-server/metrics-server v0.7.1 a24c7c057ec87 19.5MB
registry./pause 3.9 e6f1816883972 319kB

Initialization on Master01

Initialization is performed on the master01 node first, which produces a single-node Kubernetes control plane; the other nodes are then joined to it.

Tip: kubeadm init performs a system pre-check and continues only if it passes; the pre-check can also be run in advance with: kubeadm init phase preflight

[root@master01 ~]# kubeadm init --config=kubeadm/ --upload-certs #Keep the following commands for subsequent node additions
[init] Using Kubernetes version: v1.30.3
[preflight] Running pre-flight checks
……
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.24.10.100:16443 --token 5ogx63.mjfb2mvyyebp30v6 \
	--discovery-token-ca-cert-hash sha256:35332dd14dac287b35b85af9fc03bd45af15d14248aa3c255dfc96abb1082021 \
	--control-plane --certificate-key 2a3eea130eb22d945cfee660c40a250731a1853e54bbf25ee13c0400d4a04ad1

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.10.100:16443 --token 5ogx63.mjfb2mvyyebp30v6 \
	--discovery-token-ca-cert-hash sha256:35332dd14dac287b35b85af9fc03bd45af15d14248aa3c255dfc96abb1082021

Note: As above, the token is valid for 24 hours by default; the current tokens and the CA cert hash can be obtained as follows:
kubeadm token list
If the token has expired, a new one can be generated with the commands below (see also the one-step alternative after them).

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
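
Alternatively, kubeadm can print a complete, ready-to-use worker join command, creating a fresh token in the process:

[root@master01 ~]# kubeadm token create --print-join-command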

Create the relevant Kubernetes cluster configuration file save directory.

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master01 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF # set the KUBECONFIG environment variable
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master01 ~]# source ~/.bashrc

BONUS: The approximate steps of the initialization process are as follows:

  • [certs]: Generate the relevant certificates.
  • [control-plane]: create static Pod for Kubernetes control node
  • [etcd]: create static Pod for ETCD
  • [kubelet-start]: Generate the kubelet configuration file "/var/lib/kubelet/config.yaml".
  • [kubeconfig]: generate the relevant kubeconfig file
  • [bootstraptoken]: Generate a token to be used when adding nodes to the cluster using kubeadm join.
  • [addons]: related plug-ins that come with it

Tip: Initialization only needs to be performed on master01. If initialization fails, the node can be reset with kubeadm reset -f kubeadm/ && rm -rf $HOME/.kube /etc/cni/ /etc/kubernetes/

Adding a Master node

Use kubeadm join to add other Master nodes to the cluster.

[root@master02 ~]# kubeadm join 172.24.10.100:16443 --token 5ogx63.mjfb2mvyyebp30v6 \
	--discovery-token-ca-cert-hash sha256:35332dd14dac287b35b85af9fc03bd45af15d14248aa3c255dfc96abb1082021 \
	--control-plane --certificate-key 2a3eea130eb22d945cfee660c40a250731a1853e54bbf25ee13c0400d4a04ad1
[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master02 ~]# cat << EOF >> ~/.bashrc
export KUBECONFIG=$HOME/.kube/config
EOF # set the KUBECONFIG environment variable
[root@master02 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master02 ~]# source ~/.bashrc

Tip: master03 is added to the control plane of the current cluster in the same way.
If joining fails, the node can be reset with kubeadm reset -f kubeadm/ && rm -rf $HOME/.kube /etc/cni/ /etc/kubernetes/

Installing the network plug-in

Network Plug-in Introduction

  • Calico is a secure L3 network and network policy provider.
  • Canal combines Flannel and Calico to provide networking and network policy.
  • Cilium is an L3 network and network policy plug-in that transparently enforces HTTP/API/L7 policies. Both routing and overlay/encapsulation modes are supported.
  • Contiv provides configurable networking (native L3 with BGP, overlay with vxlan, classic L2, and Cisco-SDN/ACI) and a rich policy framework for a wide range of use cases.The Contiv project is completely open source. The installation tool provides both kubeadm-based and non-kubeadm-based installation options.
  • Flannel is an overlay network provider that can be used with Kubernetes.
  • Romana is a Layer 3 solution for Pod networking and supports the NetworkPolicy API; kubeadm add-on installation details can be found here.
  • Weave Net provides networks and network policies that work with participation at both ends of the network packet and do not require additional databases.
  • CNI-Genie enables Kubernetes to seamlessly connect to a CNI plugin such as Flannel, Calico, Canal, Romana, or Weave.

Tip: This program uses the Calico plugin.

Deployment of calico

Confirm the relevant configuration, such as the MTU, the NIC interface, and the Pod IP address segment.
The original calico manifest can be found at the official source: /projectcalico/calico/v3.27.2/manifests/

[root@master01 ~]# vim kubeadm/calico/ #Check Configuration
……
data:
……
  veth_mtu: "1450"
……
            - name: CALICO_IPV4POOL_CIDR
              value: "10.10.0.0/16" # configure the Pod network segment
……
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth.*" #Check the NICs between nodes
……
[root@master01 ~]# kubectl apply -f kubeadm/calico/
[root@master01 ~]# kubectl get pods --all-namespaces -o wide # View all deployed Pods
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-77d59654f4-ld47h 1/1 Running 0 63s 10.10.59.194 master02 <none> <none>
kube-system calico-node-5x2qt 1/1 Running 0 63s 172.24.10.11 master01 <none> <none>
kube-system calico-node-hk9p6 1/1 Running 0 63s 172.24.10.12 master02 <none> <none>
kube-system calico-node-ttbr5 1/1 Running 0 63s 172.24.10.13 master03 <none> <none>
kube-system coredns-7db6d8ff4d-4swqk 1/1 Running 0 8m 10.10.59.195 master02 <none> <none>
kube-system coredns-7db6d8ff4d-zv4n9 1/1 Running 0 8m 10.10.59.193 master02 <none> <none>
kube-system etcd-master01 1/1 Running 0 8m12s 172.24.10.11 master01 <none> <none>
kube-system etcd-master02 1/1 Running 0 5m38s 172.24.10.12 master02 <none> <none>
kube-system etcd-master03 1/1 Running 0 5m46s 172.24.10.13 master03 <none> <none>
kube-system kube-apiserver-master01 1/1 Running 0 8m12s 172.24.10.11 master01 <none> <none>
kube-system kube-apiserver-master02 1/1 Running 0 5m47s 172.24.10.12 master02 <none> <none>
kube-system kube-apiserver-master03 1/1 Running 0 5m46s 172.24.10.13 master03 <none> <none>
kube-system kube-controller-manager-master01 1/1 Running 0 8m18s 172.24.10.11 master01 <none> <none>
kube-system kube-controller-manager-master02 1/1 Running 0 5m47s 172.24.10.12 master02 <none> <none>
kube-system kube-controller-manager-master03 1/1 Running 0 5m46s 172.24.10.13 master03 <none> <none>
kube-system kube-proxy-98dzr 1/1 Running 0 8m 172.24.10.11 master01 <none> <none>
kube-system kube-proxy-wcgld 1/1 Running 0 5m50s 172.24.10.13 master03 <none> <none>
kube-system kube-proxy-wf4tg 1/1 Running 0 5m50s 172.24.10.12 master02 <none> <none>
kube-system kube-scheduler-master01 1/1 Running 0 8m14s 172.24.10.11 master01 <none> <none>
kube-system kube-scheduler-master02 1/1 Running 0 5m47s 172.24.10.12 master02 <none> <none>
kube-system kube-scheduler-master03 1/1 Running 0 5m45s 172.24.10.13 master03 <none> <none>

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane 8m25s v1.30.3
master02 Ready control-plane 4m52s v1.30.3
master03 Ready control-plane 4m49s v1.30.3

Hint: the official calico reference:/manifests/

Modify the node port range

The default NodePort range in Kubernetes is 30000-32767. The full port range can be opened for the many applications deployed later, such as ports 80 and 443 for ingress.
After opening the full range, take care to avoid well-known ports, such as 8080, when assigning NodePorts.

[root@master01 ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml # Append the port-range configuration
......
    --service-node-port-range=1-65535
......

Tip: As above you need to operate on all Master nodes.
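
After the kubelet restarts the static Pods, the new flag can be confirmed from the running apiserver Pods (kubeadm labels them with component=kube-apiserver), for example:

[root@master01 ~]# kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep service-node-port-range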

Adding a Worker node

Adding a Worker node

[root@master01 ~]# source 
[root@master01 ~]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "kubeadm join 172.24.10.100:16443 --token 5ogx63.mjfb2mvyyebp30v6 \
	--discovery-token-ca-cert-hash sha256:35332dd14dac287b35b85af9fc03bd45af15d14248aa3c255dfc96abb1082021"
    ssh root@${node_ip} "systemctl enable kubelet.service"
  done

Tip: As above, only the master01 node needs to be operated on to join all the Worker nodes to the cluster. If a join fails, the node can be reset as follows:

[root@worker01 ~]# kubeadm reset
[root@worker01 ~]# ifconfig kube-ipvs0 down
[root@worker01 ~]# ip link delete kube-ipvs0
[root@worker01 ~]# ifconfig tunl0@NONE down
[root@worker01 ~]# ip link delete tunl0@NONE
[root@worker01 ~]# rm -rf /var/lib/cni/

Validation

[root@master01 ~]# kubectl get nodes #node state
[root@master01 ~]# kubectl get cs #Component Status
[root@master01 ~]# kubectl get serviceaccount #service account
[root@master01 ~]# kubectl cluster-info #Cluster Information
[root@master01 ~]# kubectl get pod -n kube-system -o wide #All service statuses


Tip: For more kubectl usage references:/docs/reference/kubectl/kubectl/
/docs/reference/kubectl/overview/
More kubeadm usage references:/docs/reference/setup-tools/kubeadm/kubeadm/

Metrics deployment

Introduction to Metrics

Early versions of Kubernetes relied on Heapster for performance data collection and monitoring. Starting with version 1.8, Kubernetes provides a standardized interface for performance data in the form of the Metrics API, and starting with version 1.10 Heapster was replaced by Metrics Server. In the new Kubernetes monitoring system, Metrics Server provides the Core Metrics, including CPU and memory usage of Nodes and Pods, while other Custom Metrics are handled by Prometheus and similar components.

Metrics Server is a scalable, efficient source of container resource metrics that is typically used by Kubernetes' built-in auto-scaling, i.e., auto-scaling can be driven by these metrics.

Metrics Server collects resource metrics from Kubelets and exposes them to the Kubernetes apiserver via the Metrics API, for use in horizontal or vertical Pod auto-scaling.
kubectl top also uses the Metrics API, allowing you to view the resource usage of related objects.

Tip: The current official recommendation is to use Metrics Server only for auto-scaling. Do not use it as a monitoring solution for Kubernetes, or as an upstream source for one; for a complete Kubernetes monitoring solution, collect metrics directly from the Kubelet's /metrics/resource endpoint.

Metrics Server Suggested Scenarios

Scenarios where Metrics Server is useful:

  • CPU/memory based horizontal auto-scaling;
  • Automatically adjusting/suggesting the resources needed by containers (see vertical autoscaling for more).

Metrics Server does not recommend scenarios

Scenarios where Metrics Server is not recommended.

  • Non-Kubernetes clusters;
  • An accurate basis for resource consumption of cluster resource objects;
  • Horizontal auto-scaling based on resources other than CPU/memory.

Metrics Features

Metrics Server Key Features.

  • Works as a single Pod on most clusters;
  • Fast auto-scaling, collecting metrics every 15 seconds;
  • Extremely low resource consumption, requiring only about 1 millicore of CPU and 2 MB of memory per node in the cluster;
  • Scalable to support clusters of up to 5000 nodes.

Metrics Requirements

Metrics Server has specific requirement dependencies on cluster and network configurations that are not turned on by default for all clusters.
Before using Metrics Server, you need to ensure that the cluster supports these requirements:

  • kube-apiserver must have aggregation layer enabled;
  • The node must have Webhook authentication and authorization enabled;
  • Kubelet certificates need to be signed by a cluster certificate authority (or disable certificate validation by passing --kubelet-insecure-tls to Metrics Server);
  • The container runtime must implement the container metrics RPCs (or have cAdvisor support);
  • The network must support the following communication:
    • Control plane to Metrics Server: control plane nodes need to reach the Metrics Server's Pod IP on port 10250 (if hostNetwork is enabled, this can be a custom node IP and a corresponding custom port);
    • Metrics Server to the Kubelet on all nodes: Metrics Server needs to reach the node address and Kubelet port. The address and port are configured in the kubelet and published as part of the Node object; Metrics Server selects the first node address according to the kubelet-preferred-address-types command line flag (default InternalIP,ExternalIP,Hostname) and uses the default port 10250.

Enabling the Aggregation Layer

For background on the aggregation layer, see: /liukuan73/article/details/81352637
With the kubeadm deployment method, the aggregation layer is enabled by default.
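
On a kubeadm cluster this can be confirmed by checking that the aggregation-layer flags are present in the apiserver manifest, for example:

[root@master01 ~]# grep requestheader /etc/kubernetes/manifests/kube-apiserver.yaml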

Getting the deployment file

Customize the Metrics Server deployment according to the actual production environment; anything not mentioned can keep its defaults.
The main changes are: setting the number of replicas to 3 and appending the --kubelet-insecure-tls configuration.

[root@master01 ~]# mkdir metrics
[root@master01 ~]# cd metrics/
[root@master01 metrics]# wget /kubernetes-sigs/metrics-server/releases/latest/download/

[root@master01 metrics]# vi
……
apiVersion: apps/v1
kind: Deployment
……
spec:
  replicas: 3 #Adjust the number of replicas to the cluster size
    ……
    spec:
      hostNetwork: true #Add this line
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10300 #Modify the port
        - --kubelet-insecure-tls #Add this line
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,InternalDNS,ExternalDNS #Modify these args
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry./metrics-server/metrics-server:v0.7.1
        imagePullPolicy: IfNotPresent
    ……
        ports:
        - containerPort: 10300
    ……

Tip: The default port of 10250 will be used by kubelet as a service listening port, so it is recommended to change the port.

Formal deployment

[root@master01 metrics]# kubectl apply -f 
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP             NODE       NOMINATED NODE   READINESS GATES
metrics-server-78bd46cc84-lm9r7   1/1     Running   0          42s    172.24.10.15   worker02   <none>           <none>
metrics-server-78bd46cc84-qsxtf   1/1     Running   0          112s   172.24.10.14   worker01   <none>           <none>
metrics-server-78bd46cc84-zjsn6   1/1     Running   0          78s    172.24.10.16   worker03   <none>           <none>
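
Optionally, verify that the Metrics API has been registered and reports as available before relying on kubectl top:

[root@master01 metrics]# kubectl get apiservice v1beta1.metrics.k8s.io
[root@master01 metrics]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | head -c 300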

View Resource Monitor

You can use kubectl top to view the relevant monitoring entries.

[root@master01 ~]# kubectl top nodes
[root@master01 ~]# kubectl top pods --all-namespaces


Tip: The data provided by Metrics Server can also be used by HPA controllers to automatically scale Pods in and out based on CPU utilization or memory usage.
For more deployment references on metrics:
/docs/tasks/debug-application-cluster/resource-metrics-pipeline/
Turn on the Enable API Aggregation reference:
/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
API Aggregation Introduction Reference:
/docs/tasks/access-kubernetes-api/configure-aggregation-layer/

Nginx ingress deployment

Applications in Kubernetes are usually exposed externally as Services, which are represented as IP:Port, i.e., they work at the TCP/IP layer.
For HTTP-based services, different URL addresses often correspond to different back-end services (RS) or virtual servers (Virtual Host), and these application-layer forwarding mechanisms are not possible through Kubernetes' Service mechanism alone.

Since Kubernetes version 1.1, the Ingress resource object has been added to forward access requests from different URLs to different Services in the back-end, in order to realize the business routing mechanism at the HTTP layer.
Kubernetes uses an Ingress Policy Rule and a specific Ingress Controller, which combine to implement a complete Ingress Load Balancer.
When using Ingress for load distribution, the Ingress Controller forwards client requests directly to the backend Endpoints (Pods) of the Service according to the Ingress policy rules, bypassing kube-proxy's forwarding, so kube-proxy is no longer involved in that path.

Put simply: the ingress controller runs as a DaemonSet or Deployment on the corresponding nodes and listens on ports 80 or 443. Because it binds the host's port 80 (similar to a NodePort) and itself runs inside the cluster, it can forward requests directly to the corresponding Service IP according to the configured rules, i.e.:
ingress controller + ingress policy rules ----> services.

At the same time, when the Ingress Controller provides external services, it actually fulfills the function of an edge router.

Typical architecture for HTTP layer routing:


Setting up labels

It is recommended that non-business applications required to build the cluster (such as Ingress) be deployed on the master nodes, reusing the masters' high availability.
Deploying ingress on the master nodes is achieved with labels, combined with tolerations in the deployment yaml.

[root@master01 ~]# kubectl label nodes master0{1,2,3} ingress=enable

Access to resources

Get the yaml resources needed for deployment.

[root@master01 ~]# mkdir ingress
[root@master01 ~]# cd ingress/
[root@master01 ingress]# wget /kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/baremetal/

Hint: ingress official reference:/kubernetes/ingress-nginx
/ingress-nginx/deploy/

Modify Configuration

To make later management and troubleshooting easier, the Nginx ingress Pods mount the host time zone so that the Pod time is correct and log records have accurate timestamps.
Some simple ingress configuration, such as the log format, is also applied.

[root@master01 ingress]# vi
    ……
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  client-header-buffer-size: "512k" #Buffer size for client request headers
  large-client-header-buffers: "4 512k" #Maximum number and size of buffers for reading large client request headers
  client-body-buffer-size: "128k" #Buffer size for reading the client request body
  proxy-buffer-size: "256k" #Proxy buffer size
  proxy-body-size: "50m" #Proxy body size limit
  server-name-hash-bucket-size: "128" #Server name hash bucket size
  map-hash-bucket-size: "128" #Map hash bucket size
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA" #SSL cipher suites
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"                                        #SSL protocols
  log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id","remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }' #Log format
kind: ConfigMap
……
---
apiVersion: v1
kind: Service
metadata:
……
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80                                                                #Add this line
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443                                                               #Add this line
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
  externalTrafficPolicy: Local                                                  #Add this line
……
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 3                                                                   #Configure the number of replicas
……
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
……
        image: registry./ingress-nginx/controller:v1.11.1                       #Change the image to a mirror registry
……
        volumeMounts:
……
        - mountPath: /etc/localtime                                             #Mount the host's localtime
          name: timeconfig
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
        ingress: enable
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule                                                    #Append nodeSelector and tolerations
……
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
      - name: timeconfig                                                        #Configure the hostPath as the mounted volume
        hostPath:
          path: /etc/localtime
……
        image: registry./ingress-nginx/kube-webhook-certgen:v1.4.1              #Change the image to a mirror registry
……
        image: registry./ingress-nginx/kube-webhook-certgen:v1.4.1              #Change the image to a mirror registry
……

[root@master01 ingress]# kubectl apply -f

Tip: If a default backend is added, the controller must finish creating before the default backend can be deployed successfully; newer versions of ingress-nginx no longer recommend adding a default backend.

Validation

Check the progress of the Pod deployment to see if it completed successfully.

[root@master01 ingress]# kubectl get pods -n ingress-nginx -o wide
[root@master01 ingress]# kubectl get svc -n ingress-nginx -o wide


Hint: Refer to the documentation:/kubernetes/ingress-nginx/blob/master/docs/deploy/
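For a quick data-path check, the controller can also be probed directly with curl; a sketch assuming the placeholder node IP is replaced with the address of a node running the controller (a master node here) and that a test host rule (such as the hypothetical demo.example.com above) exists:

[root@master01 ingress]# curl -I -H "Host: demo.example.com" http://<nodeIP>/   #expect an nginx response (404 if no rule/backend matches)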

Longhorn Storage Deployment

Longhorn Overview

Longhorn is an open source distributed block storage system for Kubernetes.
For the current Kubernetes 1.30.3, Longhorn 1.6.2 is recommended.

Hint: For more introductory references:/longhorn/longhorn

Basic Software Installation

Since business applications may later run on any node, the mount operation must be executable on every node.
Therefore the base software needs to be installed on all nodes.

[root@master01 ~]# source 
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "yum -y install iscsi-initiator-utils &"
    ssh root@${all_ip} "systemctl enable iscsid --now"
  done

Tip: All nodes need to be installed.
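To verify the service state afterwards, the same node list can be looped over again (a quick check reusing the ALL_IPS variable sourced above):

[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl is-enabled iscsid; systemctl is-active iscsid"
  done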

Setting up labels

The GUI of the storage component is deployed on the master nodes.

[root@master01 ~]# kubectl label nodes master0{1,2,3} longhorn-ui=enabled

Tip: Deploying the UI on the master nodes reuses the masters' high availability, which is why it is placed there.

Prepare the disk

For Longhorn's distributed storage, it is recommended to dedicate a separate disk device as the storage volume, which can be mounted in advance.
Longhorn uses /var/lib/longhorn/ as the default data path, so the /dev/nvme0n2 device can be mounted there in advance.
The bare disk device name differs between environments; adjust it according to the actual environment.

[root@master01 ~]# source 

[root@master01 ~]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkfs.xfs -f /dev/nvme0n2 && mkdir -p /var/lib/longhorn/ && echo '/dev/nvme0n2        /var/lib/longhorn        xfs        defaults        0 0' >> /etc/fstab && mount -a"
  done
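To confirm that the dedicated disk is mounted on every node, the mount point can be checked in the same loop style (reusing the NODE_IPS variable from the sourced environment file):

[root@master01 ~]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "df -hT /var/lib/longhorn/"
  done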

Configuring Longhorn

Optimize the configuration of Longhorn according to the actual production environment.

[root@master01 ~]# mkdir longhorn
[root@master01 ~]# cd longhorn/
[root@master01 longhorn]# wget /longhorn/longhorn/v1.6.2/deploy/

[root@master01 longhorn]# vi
……
---
# Source: longhorn/templates/
apiVersion: apps/v1
kind: Deployment
……
spec:
  replicas: 2
……
      containers:
      - name: longhorn-ui
        image: longhornio/longhorn-ui:v1.6.0
……
      nodeSelector:
        longhorn-ui: enabled                                                    #Append the label selector
      tolerations:
        - key: node-role.kubernetes.io/control-plane                            #Add toleration
          effect: NoSchedule
……

Official deployment

Deployment based on optimized yaml.

[root@master01 ~]# cd  longhorn/
[root@master01 longhorn]# kubectl apply -f 
[root@master01 longhorn]# kubectl -n longhorn-system get pods -o wide
NAME                                                READY   STATUS    RESTARTS        AGE     IP             NODE       NOMINATED NODE   READINESS GATES
csi-attacher-57689cc84b-55vw6                       1/1     Running   0               104s    10.10.19.82    worker03   <none>           <none>
csi-attacher-57689cc84b-gxm62                       1/1     Running   0               105s    10.10.30.82    worker02   <none>           <none>
csi-attacher-57689cc84b-jtqdw                       1/1     Running   0               104s    10.10.5.17     worker01   <none>           <none>
csi-provisioner-6c78dcb664-cswfz                    1/1     Running   0               104s    10.10.19.83    worker03   <none>           <none>
csi-provisioner-6c78dcb664-kdnhc                    1/1     Running   0               104s    10.10.5.18     worker01   <none>           <none>
csi-provisioner-6c78dcb664-vqs4z                    1/1     Running   0               104s    10.10.30.81    worker02   <none>           <none>
csi-resizer-7466f7b45f-8lbhp                        1/1     Running   0               104s    10.10.5.21     worker01   <none>           <none>
csi-resizer-7466f7b45f-9g7rw                        1/1     Running   0               104s    10.10.30.83    worker02   <none>           <none>
csi-resizer-7466f7b45f-xzsgs                        1/1     Running   0               104s    10.10.19.81    worker03   <none>           <none>
csi-snapshotter-58bf69fbd5-5b59k                    1/1     Running   0               104s    10.10.19.85    worker03   <none>           <none>
csi-snapshotter-58bf69fbd5-7q25t                    1/1     Running   0               104s    10.10.30.85    worker02   <none>           <none>
csi-snapshotter-58bf69fbd5-rprpq                    1/1     Running   0               104s    10.10.5.19     worker01   <none>           <none>
engine-image-ei-acb7590c-9b8wf                      1/1     Running   0               116s    10.10.30.79    worker02   <none>           <none>
engine-image-ei-acb7590c-bbcw9                      1/1     Running   0               116s    10.10.19.79    worker03   <none>           <none>
engine-image-ei-acb7590c-d4qlp                      1/1     Running   0               116s    10.10.5.15     worker01   <none>           <none>
instance-manager-0cf302d46e3eaf0be2c65de14febecb3   1/1     Running   0               110s    10.10.30.80    worker02   <none>           <none>
instance-manager-652604acb4423fc91cae625c664b813b   1/1     Running   0               2m21s   10.10.5.14     worker01   <none>           <none>
instance-manager-6e47bda67fc7278dee5cbb280e6a8fde   1/1     Running   0               110s    10.10.19.80    worker03   <none>           <none>
longhorn-csi-plugin-j92qz                           3/3     Running   0               104s    10.10.19.84    worker03   <none>           <none>
longhorn-csi-plugin-lsqzb                           3/3     Running   0               104s    10.10.5.20     worker01   <none>           <none>
longhorn-csi-plugin-nv7vk                           3/3     Running   0               104s    10.10.30.84    worker02   <none>           <none>
longhorn-driver-deployer-576d574c8-vw8hq            1/1     Running   0               4m53s   10.10.30.78    worker02   <none>           <none>
longhorn-manager-2vfpz                              1/1     Running   3 (2m16s ago)   4m53s   10.10.30.77    worker02   <none>           <none>
longhorn-manager-4d5w9                              1/1     Running   2 (2m19s ago)   4m53s   10.10.5.13     worker01   <none>           <none>
longhorn-manager-r55wx                              1/1     Running   3 (4m26s ago)   4m53s   10.10.19.78    worker03   <none>           <none>
longhorn-ui-7cfd57b47d-brh89                        1/1     Running   0               4m53s   10.10.59.198   master02   <none>           <none>
longhorn-ui-7cfd57b47d-qndks                        1/1     Running   0               4m53s   10.10.235.8    master03   <none>           <none>

Tip: If the deployment is abnormal, it can be deleted and rebuilt. If the namespace cannot be deleted, it can be removed as follows:

wget /longhorn/longhorn/v1.6.0/uninstall/
kubectl apply -f

kubectl get job/longhorn-uninstall -n longhorn-system -w

kubectl delete -f #Wait for the job to complete and execute delete again.

rm -rf /var/lib/longhorn/*

If it still cannot be released, refer to the attachment "Troubleshooting Notes".

Dynamic sc creation

After deploying Longhorn, a sc named longhorn has been created by default.

[root@master01 longhorn]# kubectl get sc
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)      Delete          Immediate           true                   5m53s
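If different replica policies are needed, additional StorageClasses can be created on top of the Longhorn provisioner; the following is a minimal sketch, where the class name and parameter values are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2replica                #hypothetical class name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "2"                  #keep 2 replicas instead of the default 3
  staleReplicaTimeout: "2880"            #minutes before a faulted replica is cleaned up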

Testing PV and PVC

A common Nginx Pod is used for testing, simulating the persistent storage volumes typical of web applications in production environments.


[root@master01 longhorn]# cat <<EOF >
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Mi
EOF #Create the PVC

[root@master01 longhorn]# cat <<EOF >
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-pod
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /usr/share/nginx/html
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-pvc
EOF #Create the Pod

[root@master01 longhorn]# kubectl apply -f

[root@master01 longhorn]# kubectl apply -f

[root@master01 longhorn]# kubectl get pods
[root@master01 longhorn]# kubectl get pvc
[root@master01 longhorn]# kubectl get pv
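To confirm that data is actually persisted on the Longhorn volume, a file can be written into the mounted path and read back (a simple check against the longhorn-pod created above):

[root@master01 longhorn]# kubectl exec longhorn-pod -- sh -c 'echo "longhorn test" > /usr/share/nginx/html/index.html'
[root@master01 longhorn]# kubectl exec longhorn-pod -- cat /usr/share/nginx/html/index.html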


Ingress exposes Longhorn

Expose the Longhorn UI through the ingress deployed earlier, so that the Longhorn GUI can be accessed by URL for basic Longhorn management.

[root@master01 longhorn]# yum -y install httpd-tools
[root@master01 longhorn]# htpasswd -c auth admin #Create a username and password
New password: [enter a password]
Re-type new password: [enter a password]

Tip: It can also be created with the following command:
USER=admin; PASSWORD=admin1234; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth

[root@master01 longhorn]# kubectl -n longhorn-system create secret generic longhorn-basic-auth --from-file=auth
[root@master01 longhorn]# cat <<EOF > 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: longhorn-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required '
spec:
  ingressClassName: "nginx"
  rules:
  - host: 
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port: 
              number: 80
EOF

[root@master01 longhorn]# kubectl apply -f 
[root@master01 longhorn]# kubectl -n longhorn-system get svc longhorn-frontend
[root@master01 longhorn]# kubectl -n longhorn-system get ingress longhorn-ingress
[root@master01 longhorn]# kubectl -n longhorn-system describe svc longhorn-frontend
[root@master01 longhorn]# kubectl -n longhorn-system describe ingress longhorn-ingress


Validation

Visit the exposed domain in a browser: , and enter the account and password that were set.


Use admin/[password] to log in to view.


Helm deployment

Introduction to helm

Helm is a package management tool for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, a package manager lets you quickly find, download, and install packages. In Helm each package is called a Chart, and a Chart is a directory (usually packaged and compressed into a single file that is easy to transfer and store).
Helm 2 consisted of a client-side component (helm) and a server-side component (Tiller); since Helm 3, which is used here, Tiller has been removed and the helm client talks to the apiserver directly. Helm packages and manages a set of Kubernetes resources, making it the best way to find, share, and use software built for Kubernetes.

The Helm Advantage

Deploying a working application in Kubernetes involves a lot of Kubernetes resources working together.
For example, installing a WordPress blog involves a number of Kubernetes resource objects: a Deployment to deploy the application, a Service for service discovery, a Secret for the WordPress username and password, and possibly a pv and pvc for persistence. WordPress also stores its data in mariadb, so mariadb must be ready before WordPress can start. These k8s resources are too scattered to manage conveniently.

Based on the above scenario, deploying an application in k8s usually faces the following issues:

  • How to centrally manage, configure, and update the resource files of these scattered k8s applications;
  • How to distribute and reuse a set of application templates;
  • How to manage a set of application resources as a single package.

For application publishers, Helm allows them to package applications, manage application dependencies, manage application versions, and publish applications to the repository.
Instead of writing complex application deployment files, users can find, install, upgrade, rollback, and uninstall applications on Kubernetes in a simple way with Helm.

Prerequisites

Helm deploys Kubernetes resources on the cluster configured in kubeconfig (the same configuration kubectl uses), so the following prerequisites must be met:

  • A running Kubernetes cluster;
  • The pre-configured kubectl client interacts correctly with the Kubernetes cluster.

Binary installation of Helm

A binary installation of helm is recommended.

[root@master01 ~]# mkdir helm
[root@master01 ~]# cd helm/
[root@master01 helm]# wget /helm/v3.15.3/helm-v3.15.
[root@master01 helm]# tar -zxvf helm-v3.15.
[root@master01 helm]# cp linux-amd64/helm /usr/local/bin/
[root@master01 helm]# helm version #View the installed version
[root@master01 helm]# echo 'source <(helm completion bash)' >> $HOME/.bashrc #helm auto-completion

Tip: Refer to the official manual for more installation options:/docs/intro/install/

Helm Operation

Find a chart

helm search can be used to search two different types of sources:
helm search hub: searches the Helm Hub, a source that aggregates Helm charts from many different repositories.
helm search repo: searches the repositories that have been added to the local helm client (with helm repo add); this search uses local data and does not require a connection to the public network.

[root@master01 ~]# helm search hub #Search all available charts
[root@master01 ~]# helm search hub wordpress

Add repo

Similar to adding yum sources to CentOS, you can add relevant sources to the helm repository.

[root@master01 ~]# helm repo list #List the repos
[root@master01 ~]# helm repo add stable /kubernetes/charts
[root@master01 ~]# helm repo add aliyun /charts
[root@master01 ~]# helm repo add jetstack

[root@master01 ~]# helm search repo stable
[root@master01 ~]# helm search repo aliyun #Search charts in the repo
[root@master01 ~]# helm repo update #Update the charts of the added repos
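With a repo added, the typical release lifecycle looks like the following sketch; the release name and chart are illustrative, and the exact chart name depends on what the added repos actually provide:

[root@master01 ~]# helm search repo wordpress              #find a chart in the added repos
[root@master01 ~]# helm install my-blog aliyun/wordpress   #install it as a release (names illustrative)
[root@master01 ~]# helm list -A                            #list releases in all namespaces
[root@master01 ~]# helm upgrade my-blog aliyun/wordpress   #upgrade the release
[root@master01 ~]# helm rollback my-blog 1                 #roll back to revision 1
[root@master01 ~]# helm uninstall my-blog                  #remove the release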

Dashboard Deployment

Introduction to dashboard

The dashboard is a web-based user interface for Kubernetes, known as WebUI.
You can use the dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot containerized applications, and manage cluster resources.
The dashboard can be used to view applications running on the cluster, as well as to create or modify individual Kubernetes resources (e.g., Deployments, Jobs, DaemonSets).
Deployments can be extended using the Deployment Wizard to initiate rolling updates, restart Pods or deploy new applications.
The dashboard also provides information about the status of Kubernetes resources in the cluster and any errors that may have occurred.
It is usually recommended to deploy dashboards in production environments in order to graphically accomplish basic operations and maintenance.

As of version 7.0.0, the community has dropped support for manifest-based installation and only supports Helm-based installation. The original yaml manifest installation is no longer feasible because of the multi-container setup and the heavy reliance on the Kong gateway acting as the API proxy.

The Helm-based installation also deploys faster and gives better control over all the dependencies Dashboard needs to run. The version-control scheme has changed as well: appVersion has been removed from the Helm chart, because with the multi-container setup each module is now versioned separately, so the Helm chart version can be considered the application version.

Setting up labels

Based on best practices, non-business applications that belong to the cluster itself are deployed on the master nodes.

[root@master01 ~]# kubectl label nodes master0{1,2,3} dashboard=enable

Tip: It is recommended that for applications related to Kubernetes itself (e.g., dashboard), such non-business applications be deployed on the master node.

Creating Certificates

By default, the dashboard automatically generates a certificate and creates a secret from it. In a production environment the dashboard can be published under its own domain name, so a TLS certificate for that domain needs to be prepared.
This experiment uses a free one-year certificate; free certificate acquisition can be referenced at: .
Upload the obtained certificate to the corresponding directory.

[root@master01 ~]# mkdir -p /root/dashboard/certs
[root@master01 ~]# cd /root/dashboard/certs
[root@master01 certs]# mv  
[root@master01 certs]# mv  
[root@master01 certs]# ll
total 8.0K
-rw-r--r-- 1 root root 3.9K Aug  8 16:15 
-rw-r--r-- 1 root root 1.7K Aug  8 16:15 

Tip: You can also create a self-signed certificate manually as follows:

[root@master01 ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout -out -subj "/C=CN/ST=ZheJiang/L=HangZhou/O=Xianghy/OU=Xianghy/CN="

Creating a secret manually

For scenarios with customized certificates, it is recommended to create the SECRET in advance using the corresponding certificate.

[root@master01 ~]# kubectl create ns kubernetes-dashboard #The v3+ releases of dashboard use a separate ns
[root@master01 ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/dashboard/certs/ -n kubernetes-dashboard
[root@master01 ~]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard -o yaml #View Certificate Information
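Optionally, the certificate stored in the secret can be decoded and inspected to confirm the right file was loaded; in this sketch the data key tls.crt is an assumption and should be replaced with the key shown by describe (the keys follow the uploaded file names):

[root@master01 ~]# kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-certs   #list the data keys
[root@master01 ~]# kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-certs -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates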

Add repo

Add the repo repository for kubernetes-dashboard.

[root@master01 ~]# helm repo add kubernetes-dashboard /dashboard/

[root@master01 ~]# helm repo list
NAME                	URL                                     
……           
kubernetes-dashboard	/dashboard/

Edit Configuration

Modify the default start values according to the actual situation. Unconfigured items indicate that the default values are used.

The following yaml does several major customizations:

  • Specifies that the dashboard is deployed on the master nodes, classifying it as a cluster-owned application rather than a business application;
  • Specifies the use of its own TLS certificate and the https ingress domain name;
  • Specifies tolerations so the Pods can be scheduled onto the tainted master nodes;
  • Specifies that the Pods mount the local time file so that the Pod clock is correct.

For the kubernetes-dashboard default values, refer to: kubernetes-dashboard values.
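The full default values can also be dumped locally for reference before overriding them (assuming the kubernetes-dashboard repo alias added above; the output file name is illustrative):

[root@master01 ~]# helm show values kubernetes-dashboard/kubernetes-dashboard > /root/dashboard/default-values.yaml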

[root@master01 ~]# cd /root/dashboard/
[root@master01 dashboard]# vi 

app:
  mode: 'dashboard'
  scheduling:
    nodeSelector: {"dashboard": "enable"}
  ingress:
    enabled: true
    hosts:
      # - localhost
      - 
    ingressClassName: nginx
    useDefaultIngressClass: false
    annotations: 
      /ssl-redirect: "true"
    tls:
      enabled: true
      secretName: "kubernetes-dashboard-certs"
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule

auth:
  nodeSelector: {"dashboard": "enable"}

# API deployment configuration
api:
  scaling:
    replicas: 3
  containers:
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
      - mountPath: /etc/localtime
        name: timeconfig
  volumes:
    - name: tmp-volume
      emptyDir: {}
    - name: timeconfig
      hostPath:
        path: /etc/localtime
  nodeSelector: {"dashboard": "enable"}

# WEB UI deployment configuration
web:
  role: web
  scaling:
    replicas: 3
    revisionHistoryLimit: 10
  containers:
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
      - mountPath: /etc/localtime
        name: timeconfig
  volumes:
    - name: tmp-volume
      emptyDir: {}
    - name: timeconfig
      hostPath:
        path: /etc/localtime
  nodeSelector: {"dashboard": "enable"}

# Metrics Scraper
metricsScraper:
  scaling:
    replicas: 3
    revisionHistoryLimit: 10
  containers:
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
      - mountPath: /etc/localtime
        name: timeconfig
  volumes:
    - name: tmp-volume
      emptyDir: {}
    - name: timeconfig
      hostPath:
        path: /etc/localtime
  nodeSelector: {"dashboard": "enable"}

kong:
  nodeSelector: {"dashboard": "enable"}

Official deployment

Tuning is performed according to production environment best practices and deployment begins after tuning is complete.

[root@master01 dashboard]# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard -f 

[root@master01 dashboard]# helm -n kubernetes-dashboard list
NAME                	NAMESPACE           	REVISION	UPDATED                                	STATUS  	CHART                     	APP VERSION
kubernetes-dashboard	kubernetes-dashboard	1       	2024-08-14 19:39:29.034973262 +0800 CST	deployed	kubernetes-dashboard-7.5.0

[root@master01 dashboard]# kubectl -n kubernetes-dashboard get all
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get 
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get services
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get pods -o wide
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get svc
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get ingress -o wide


Create an administrator account

It is recommended to create an administrator account: the dashboard does not create an account with administrator privileges by default, and the v7 release only supports token-based login.
Therefore, create a user with administrator rights, create a token for that user, and then log in with the token.

[root@master01 dashboard]# cat <<EOF > 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
  
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: admin
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin"
EOF

[root@master01 dashboard]# kubectl apply -f 

View Token

Pasting a raw token each time is relatively cumbersome; the token can be added to a kubeconfig file, and the dashboard can then be accessed with that kubeconfig file.

[root@master01 dashboard]# ADMIN_SECRET=$(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
[root@master01 dashboard]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kubernetes-dashboard ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@master01 dashboard]# echo ${DASHBOARD_LOGIN_TOKEN}

Tip: You can also get the token for the secret whose name is admin by doing the following.
kubectl -n kubernetes-dashboard get secret admin -o jsonpath='{.data.token}' | base64 -d
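To avoid handling the raw token directly, it can also be embedded into a dedicated kubeconfig file that the dashboard accepts for login; a sketch based on the DASHBOARD_LOGIN_TOKEN variable above, with an illustrative file name:

[root@master01 dashboard]# cp $HOME/.kube/config dashboard-admin.kubeconfig
[root@master01 dashboard]# kubectl config set-credentials dashboard-admin --token=${DASHBOARD_LOGIN_TOKEN} --kubeconfig=dashboard-admin.kubeconfig
[root@master01 dashboard]# kubectl config set-context --current --user=dashboard-admin --kubeconfig=dashboard-admin.kubeconfig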

Import the certificate file so that the browser can log in using it.

Importing Certificates

Importing the certificate into your browser and setting it as trusted will circumvent the certificate untrusted popup.

Test access to the dashboard

This experiment uses the domain name exposed by ingress:
Access it using the token corresponding to the admin user.


After logging in, the default namespace is default; you can switch to other namespaces to manage and view the entire Kubernetes cluster.


Tip: For more dashboard access methods and authentication, see the attachment "Dashboard Introduction and Usage".
The whole dashboard login process can be referenced at: /post/

Scaling: Cluster Expansion and Downsizing

Cluster Expansion

  • Master Node Expansion
    Reference: Adding a Master Node Procedure
  • Worker Node Expansion
    Reference: Adding a Worker Node Procedure

Cluster Downsizing

  • Master Node Downsizing
    When a master node is removed, its Pods are automatically migrated (drained) to other nodes.
[root@master01 ~]# kubectl drain master03 --delete-emptydir-data --force --ignore-daemonsets
[root@master01 ~]# kubectl delete node master03
[root@master03 ~]# kubeadm reset -f && rm -rf $HOME/.kube
  • Worker Node Downsizing
    When a worker node is removed, its Pods are automatically migrated (drained) to other nodes.
[root@master01 ~]# kubectl drain worker04 --delete-emptydir-data --force --ignore-daemonsets
[root@master01 ~]# kubectl delete node worker04
[root@worker04 ~]# kubeadm reset -f
[root@worker04 ~]# rm -rf /etc/kubernetes/ /etc/kubernetes/ /etc/kubernetes/ /etc/kubernetes/ /etc/kubernetes/