A few words up front
After thinking about it for a long time, I've decided to write a hand-holding, 0-to-1 guide to building an enterprise CI/CD pipeline, sharing the technical points I use at work along the way: detailed deployment of k8s, harbor, jenkins, gitlab and docker, their integration, and front-end and back-end pipeline builds, releases, and so on. If anything below is lacking, please point it out and I will correct it right away. Thank you all.
Let's start with a hand-drawn guide; a rough flowchart is below:
Roughly, the deployment flow is: developers push the finished project code to gitlab with git push; Jenkins, triggered by the gitlab webhook (provided it is configured), automatically pulls the code down from gitlab, builds and compiles it, and produces an image, which it pushes to the Harbor registry. At deployment time, k8s pulls the image from Harbor to create the containers and services; once released, the application can be accessed from the external network.
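To make the flow more concrete before we start, here is a minimal shell sketch of the same steps; the registry address, image name and manifest file are hypothetical placeholders, and in the real pipeline the build/push/deploy part is performed by the Jenkins job rather than typed by hand.
# developer side: push the code to gitlab (this fires the webhook that triggers Jenkins)
git push origin main
# what the Jenkins job then roughly does (harbor.example.com/demo/app:v1 and deploy.yaml are made-up names):
docker build -t harbor.example.com/demo/app:v1 .   # build an image from the freshly pulled code
docker push harbor.example.com/demo/app:v1         # push the image to the Harbor registry
kubectl apply -f deploy.yaml                       # k8s pulls the image from Harbor and creates the workload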
Of course, the above is only a rough outline; the picture below should make it clearer.
I. Preface
There are many ways to deploy a K8s cluster. kubeadm is the official cluster deployment tool provided by K8s, and this method is the most commonly used, simple and fast, and suitable for beginners. In this article, we will use kubeadm to build a cluster demo.
II. Host Preparation
This time we build a 3-node K8s cluster on Ubuntu 22.04.4 LTS (2 cores, 4 GiB each), with the following IP plan:
hostname | ip address | host configuration |
---|---|---|
master231 | 10.0.0.231 | 2-core, 4GiB, system disk 20GiB |
worker232 | 10.0.0.232 | 2-core, 4GiB, system disk 20GiB |
worker233 | 10.0.0.233 | 2-core, 4GiB, system disk 20GiB |
III. System configuration
Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0 # turn swap off for the current boot
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # comment it out in /etc/fstab so it stays off after reboot
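To confirm swap is really off, both of the following should report no active swap:
free -h       # the Swap line should show 0B used and 0B total
swapon --show # prints nothing when no swap device is active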
Ensure that each node's MAC address and product_uuid are unique
ifconfig ens33 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
Tip.
In general, hardware devices will have unique addresses, but some VMs may have duplicate addresses.
Kubernetes uses these values to uniquely identify nodes in the cluster. If these values are not unique on each node, the installation may fail.
Check network connectivity between nodes
In short, check that the nodes of your k8s cluster can reach each other, which can be tested with the ping command.
ping -c 10
ping master231 -c 10
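If you want to ping nodes by hostname as above, make sure the hostnames resolve on every node; using the IP plan from the table, a minimal /etc/hosts addition looks like this:
cat >> /etc/hosts <<EOF
10.0.0.231 master231
10.0.0.232 worker232
10.0.0.233 worker233
EOF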
Allow iptables to inspect bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
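To verify that the module is loaded and the parameters took effect (and to load the module immediately for the current boot if it is missing):
lsmod | grep br_netfilter || modprobe br_netfilter            # load it now if it is not loaded yet
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward # both values should print 1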
Modify the cgroup driver
All nodes should use systemd as Docker's cgroup driver; on this Ubuntu setup the Docker installation already reports systemd, so no change is needed:
[root@master231 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@master231 ~]#
[root@worker232 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@worker232 ~]#
[root@worker233 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd
[root@worker233 ~]#
Tip.
If the cgroup driver is not changed to systemd, Docker defaults to cgroupfs and initialization of the master node will fail.
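If docker info does report cgroupfs on your machines, a common way to switch it (a sketch, not part of the original steps; it assumes Docker reads its configuration from /etc/docker/daemon.json) is to add the exec-opts key and restart Docker:
# add (or merge) this key into /etc/docker/daemon.json:
#   "exec-opts": ["native.cgroupdriver=systemd"]
vim /etc/docker/daemon.json
systemctl restart docker
docker info | grep "Cgroup Driver:"   # should now print: Cgroup Driver: systemd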
IV. Installing the k8s management tools
kubeadm: tool used to initialize a K8S cluster.
kubelet: used on each node in the cluster to start Pods, containers, etc.
kubectl: Command line tool used to communicate with the K8S cluster.
All Node Operations
1. Configure the Kubernetes apt source on all K8S nodes
apt-get update && apt-get install -y apt-transport-https
# the Aliyun Kubernetes apt mirror is used here
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
2. Refresh the package index
apt-get update
3. Check which k8s versions are available in the current environment
[root@master231 ~]# apt-cache madison kubeadm
kubeadm | 1.28.2-00 | /kubernetes/apt kubernetes-xenial/main amd64 packages
kubeadm | 1.28.1-00 | /kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.0-00 | /kubernetes/apt kubernetes-xenial/main amd64 Packages
4. Install kubelet, kubeadm and kubectl
apt-get -y install kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00
5. Check each component's version on all nodes
kubeadm version
kubectl version
kubelet --version
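Optionally (not in the original steps), you can pin the three packages so that a routine apt upgrade does not move the cluster to a newer version unexpectedly:
apt-mark hold kubelet kubeadm kubectl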
Installing docker
1. Write the docker installation script
[root@master231 docker]# cat
#!/bin/bash
# author: cherry
# Load the OS release variables, mainly for the ID variable (ubuntu/centos).
. /etc/os-release

# DOCKER_VERSION=26.1.1
DOCKER_VERSION=20.10.24
# DOCKER_COMPOSE_VERSION=2.27.0
DOCKER_COMPOSE_VERSION=2.23.0
FILENAME=docker-${DOCKER_VERSION}.tgz
DOCKER_COMPOSE_FILE=docker-compose-v${DOCKER_COMPOSE_VERSION}
URL=https://download.docker.com/linux/static/stable/x86_64
DOCKER_COMPOSE_URL=https://github.com/docker/compose/releases/download/v${DOCKER_COMPOSE_VERSION}/docker-compose-linux-x86_64
DOWNLOAD=./download
BASE_DIR=/softwares
OS_VERSION=$ID

# Make sure the docker-compose binary has been downloaded
function prepare(){
    # Download docker-compose only if it is not present yet
    if [ ! -f ${DOWNLOAD}/${DOCKER_COMPOSE_FILE} ]; then
        wget -T 3 -t 2 ${DOCKER_COMPOSE_URL} -O ${DOWNLOAD}/${DOCKER_COMPOSE_FILE}
        # Only check the download result when wget actually ran
        if [ $? != 0 ]; then
            rm -f ${DOWNLOAD}/${DOCKER_COMPOSE_FILE}
            echo "Sorry, ${DOCKER_COMPOSE_URL} could not be downloaded due to network problems. Exiting, please try again later."
            exit 100
        fi
    fi
    # Make the docker-compose binary executable
    chmod +x ${DOWNLOAD}/${DOCKER_COMPOSE_FILE}
}

# Installation function
function InstallDocker(){
    if [ $OS_VERSION == "centos" ]; then
        [ -f /usr/bin/wget ] || yum -y install wget
        rpm -qa | grep bash-completion || yum -y install bash-completion
    fi
    if [ $OS_VERSION == "ubuntu" ]; then
        [ -f /usr/bin/wget ] || apt -y install wget
    fi
    # Download the docker package if it does not exist locally
    if [ ! -f ${DOWNLOAD}/${FILENAME} ]; then
        wget ${URL}/${FILENAME} -O ${DOWNLOAD}/${FILENAME}
    fi
    # Create the installation directory if it does not exist
    if [ ! -d ${BASE_DIR} ]; then
        install -d ${BASE_DIR}
    fi
    # Extract the docker package into the installation directory
    tar xf ${DOWNLOAD}/${FILENAME} -C ${BASE_DIR}
    # Install docker-compose
    prepare
    cp ${DOWNLOAD}/${DOCKER_COMPOSE_FILE} ${BASE_DIR}/docker/docker-compose
    # Create symlinks so the binaries are on the PATH
    ln -svf ${BASE_DIR}/docker/* /usr/bin/
    # Bash auto-completion for docker
    cp ${DOWNLOAD}/docker /usr/share/bash-completion/completions/docker
    source /usr/share/bash-completion/completions/docker
    # Configure registry mirror acceleration (daemon.json prepared in the download directory)
    install -d /etc/docker
    cp ${DOWNLOAD}/daemon.json /etc/docker/daemon.json
    # Install the systemd unit file so docker starts on boot
    cp ${DOWNLOAD}/docker.service /usr/lib/systemd/system/docker.service
    systemctl daemon-reload
    systemctl enable --now docker
    docker version
    docker-compose version
    tput setaf 3
    echo "Installation succeeded. Thanks for using cherry's docker installation script, see you next time!"
    tput setaf 2
}

# Uninstall docker
function UninstallDocker(){
    # Stop the docker service
    systemctl disable --now docker
    # Remove the systemd unit file
    rm -f /usr/lib/systemd/system/docker.service
    # Remove the program directory
    rm -rf ${BASE_DIR}/docker
    # Remove the data directories
    rm -rf /var/lib/{docker,containerd}
    # Remove the symlinks
    rm -f /usr/bin/{containerd,containerd-shim,containerd-shim-runc-v2,ctr,docker,dockerd,docker-init,docker-proxy,runc}
    # Make the terminal pink
    tput setaf 5
    echo "Uninstalled successfully. Feel free to use cherry's docker installation script again!"
    tput setaf 7
}

# Program entry point
function main(){
    # Dispatch on the first argument
    case $1 in
    install|i)
        InstallDocker
        ;;
    remove|r)
        UninstallDocker
        ;;
    *)
        echo "Invalid parameter, Usage: $0 install|remove"
        ;;
    esac
}

# Pass the script arguments to the entry function
main $1
[root@master231 docker]# ll
total 16
drwxr-xr-x 3 root root 4096 Dec 10 05:49 ./
drwx------ 6 root root 4096 Dec 10 05:49 ../
drwxr-xr-x 2 root root 4096 May 9 2024 download/
-rwxr-xr-x 1 root root 3497 Dec 10 05:49 *
2. Install docker
[root@master231 docker]# ./ install
'/usr/bin/containerd' -> '/softwares/docker/containerd'
'/usr/bin/containerd-shim' -> '/softwares/docker/containerd-shim'
'/usr/bin/containerd-shim-runc-v2' -> '/softwares/docker/containerd-shim-runc-v2'
'/usr/bin/ctr' -> '/softwares/docker/ctr'
'/usr/bin/docker' -> '/softwares/docker/docker'
'/usr/bin/docker-compose' -> '/softwares/docker/docker-compose'
'/usr/bin/dockerd' -> '/softwares/docker/dockerd'
'/usr/bin/docker-init' -> '/softwares/docker/docker-init'
'/usr/bin/docker-proxy' -> '/softwares/docker/docker-proxy'
'/usr/bin/runc' -> '/softwares/docker/runc'
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Client:
Version: 20.10.24
API version: 1.41
Go version: go1.19.7
Git commit: 297e128
Built: Tue Apr 4 18:17:06 2023
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.24
API version: 1.41 (minimum version 1.12)
Go version: go1.19.7
Git commit: 5d6db84
Built: Tue Apr 4 18:23:02 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.6.20
GitCommit: 2806fc1057397dbaeefbea0e4e17bddfbd388f38
runc:
Version: 1.1.5
GitCommit: v1.1.5-0-gf19387a6
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Docker Compose version v2.23.0
Installation succeeded. Thanks for using cherry's docker installation script, see you next time!
Initializing the master component
1. Import the image
[root@master231 ~]# docker load -i master-1.23.
2. Initialize the master node with kubeadm
[root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=
# Keep this join command; you will need it when adding worker nodes
kubeadm join 10.0.0.231:6443 --token lzphw7.kc4iu4k0mswnpy7h \
--discovery-token-ca-cert-hash sha256:298393d4dc931d6d13ec2ec1aedd4295bcd143a84e78dfc5a82ec7e53210d511
--------------------------------------------------
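If you lose this output later, there is no need to re-initialize the cluster; kubeadm can print a fresh join command at any time when run on the master:
kubeadm token create --print-join-command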
3. Copy the kubeconfig (admin credentials) used to manage the K8S cluster
[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
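Alternatively, when working as root you can simply point KUBECONFIG at the admin config for the current session, which is what kubeadm's own output suggests:
export KUBECONFIG=/etc/kubernetes/admin.conf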
4. Check the cluster nodes
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+.
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true", "reason":""}
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 117s v1.23.17
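The NotReady status is expected at this stage: no CNI network plugin has been deployed yet, so the kubelet reports the node network as not ready. You can confirm the reason like this (the exact message wording varies by version):
kubectl describe node master231 | grep -i networkready
# typically shows something like: NetworkReady=false reason:NetworkPluginNotReady ... cni config uninitialized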
Deploying the worker components and adding the nodes
1.Importing Images
[root@worker232 ~]# docker load -i slave-1.23.
[root@worker233 ~]# docker load -i slave-1.23.
2. Run the join command (the token printed above) on each worker node
[root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token lzphw7.kc4iu4k0mswnpy7h \
--discovery-token-ca-cert-hash sha256:298393d4dc931d6d13ec2ec1aedd4295bcd143a84e78dfc5a82ec7e53210d511
[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token lzphw7.kc4iu4k0mswnpy7h \
--discovery-token-ca-cert-hash sha256:298393d4dc931d6d13ec2ec1aedd4295bcd143a84e78dfc5a82ec7e53210d511
3. On the master node, check the cluster's worker node list
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 13m v1.23.17
worker232 NotReady <none> 3m19s v1.23.17
worker233 NotReady <none> 2m3s v1.23.17
Deploying the CNI plug-in to bring up the pod network
1. Import the image
[root@master231 ~]# docker load -i flannel-cni-plugin-v1.5.
[root@worker232 ~]# docker load -i flannel-cni-plugin-v1.5.
[root@worker233 ~]# docker load -i flannel-cni-plugin-v1.5.
[root@master231 ~]# docker load -i
[root@worker232 ~]# docker load -i
[root@worker233 ~]# docker load -i
2. Download the Flannel component
[root@master231 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
3. Install the Flannel component
[root@master231 ~]# kubectl apply -f kube-flannel.yml
4. Check the image versions in the manifest; they must match the images you imported, or the pods will fail to start.
[root@master231 ~]# grep image kube-flannel.yml
5. Check whether the flannel components were installed successfully.
[root@master231 ~]# kubectl get pods -o wide -n kube-flannel
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-ckkbk 1/1 Running 0 35s 10.0.0.233 worker233 <none> <none>
kube-flannel-ds-kst7g 1/1 Running 0 35s 10.0.0.232 worker232 <none> <none>
kube-flannel-ds-ljktm 1/1 Running 0 35s 10.0.0.231 master231 <none> <none>
6. Test each node component
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 37m v1.23.17
worker232 Ready <none> 27m v1.23.17
worker233 Ready <none> 26m v1.23.17
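At this point the cluster is basically usable. As an optional smoke test (not part of the original steps; the pod name and image are just examples and assume the node can pull or has already loaded the image), start a throwaway pod and check that it receives an address from the 10.100.0.0/16 pod CIDR:
kubectl run net-test --image=nginx --restart=Never
kubectl get pods -o wide      # the pod IP should fall inside 10.100.0.0/16
kubectl delete pod net-test   # clean up the test pod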
V. Installing kubectl auto-completion
1. Temporary: takes effect only in the current shell
apt -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
2. For completion to take effect permanently, write it into the shell startup file:
[root@master231 ~]# vim .bashrc
...
source <(kubectl completion bash)
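Optionally (a common convenience, not in the original text), you can also alias kubectl to k and keep completion working for the alias by adding these two lines to .bashrc as well:
alias k=kubectl
complete -o default -F __start_kubectl k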
VI. Modification of time zones
1. Modify the time zone
[root@master231 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
2. Verify
[root@master231 ~]# date -R
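If you prefer systemd's own tooling, the same change can be made with timedatectl, which gives an equivalent result:
timedatectl set-timezone Asia/Shanghai
timedatectl status | grep "Time zone"   # verify the configured zone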
VII. k8s basic inspection
1. Check the list of worker nodes in the K8S cluster
[root@master231 ~]# kubectl get nodes
2. Check the master component
[root@master231 ~]# kubectl get cs
3. Check that the flannel pods are running properly
[root@master231 ~]# kubectl get pods -o wide -n kube-flannel
4. Check each node's NIC
ifconfig
- If a node does not have a cni0 bridge, we recommend creating the bridge device manually; note that its subnet must match that node's flannel.1 segment.
1. Assume that the master231's flannel.1 is the 10.100.0.0 segment.
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.0.1/24 dev cni0
2. Assume that the flannel.1 of worker232 is the 10.100.1.0 network segment.
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0
3. Assume that worker233's flannel.1 is the 10.100.2.0 segment.
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.2.1/24 dev cni0
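Each node's real flannel subnet is recorded by flannel itself, so before creating cni0 by hand it is worth double-checking the segment rather than assuming it (the 10.100.x.0 values above are just the typical allocation order):
cat /run/flannel/subnet.env   # FLANNEL_SUBNET is this node's pod subnet, e.g. 10.100.0.1/24
ip addr show flannel.1        # the flannel.1 address should be in the same segment
ip addr show cni0             # after creation, cni0 should carry the .1 address of that subnet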