Preamble
Recently the standalone test k8s I had built with kind failed: a few minutes after the virtual machine came up, the cluster went down, and I never found the root cause. A kind deployment is also not very convenient for customized configuration, so I simply rebuilt a standalone k8s the binary way.
Because it is used for development and testing, the control plane is not made highly available: etcd, apiserver, controller-manager and scheduler each have only one instance.
Environmental information:
- Host: Debian 12.7, 4-core CPU, 4GB RAM, 30GB storage (a 2C2G configuration would be sufficient if just deploying a k8s)
- Container Runtime: containerd v1.7.22
- etcd: v3.4.34
- kubernetes: v1.30.5
- cni: calico v3.25.0
Most of the configuration files in this article have been uploaded to the gitee repo k8s-note, under the directory "install k8s/binary standalone deployment k8s-v1.30.5"; you can clone the repo directly if needed.
Preparation
Most of the commands in this section require root privileges. If a command complains about insufficient privileges, switch to the root user or run it with sudo.
Adjusting Host Parameters
- Modify the hostname. kubernetes requires a different hostname for each node
hostnamectl set-hostname k8s-node1
- Modify the /etc/hosts file. If the intranet has a self-built DNS, this step can be skipped.
192.168.0.31 k8s-node1
- Install a time synchronization service. If there are multiple hosts, be careful to synchronize the time between hosts. If there is a time synchronization server on the intranet, you can modify the configuration of chrony to point to the intranet time synchronization server
sudo apt install -y chrony
sudo systemctl start chrony
- By default, k8s cannot run on hosts with swap enabled. The command below only disables swap temporarily; to persist the change, edit the /etc/fstab file and remove or comment out the swap-related lines (a small sketch follows the command below).
sudo swapoff -a
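A minimal sketch for persisting the change, assuming the swap entry in /etc/fstab is a normal uncommented line (back the file up first; the sed pattern is an assumption about the file's layout):
# Back up /etc/fstab, then comment out every line containing a swap entry
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
# Confirm that no active swap remains
swapon --show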
- Load the kernel modules. If this step is skipped, configuring the system parameters in the next step will report an error.
# 1. Add the configuration (any file name under /etc/modules-load.d/ works; k8s.conf is used here)
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# 2. Load Now
modprobe overlay
modprobe br_netfilter
# 3. Check the loading. If there is no output then it was not loaded successfully.
lsmod | grep br_netfilter
- Configure system parameters, mainly net.bridge.bridge-nf-call-ip6tables, net.bridge.bridge-nf-call-iptables and net.ipv4.ip_forward; the other parameters can be adjusted as appropriate.
# 1. Add the configuration file (any file name under /etc/sysctl.d/ works; k8s.conf is used here)
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
= 0
EOF
# 2. Configuration takes effect
sysctl -p /etc/sysctl.d/k8s.conf
- Enable ipvs. Write a systemd modules-load configuration file so the modules are loaded into the kernel automatically at boot. Use ipvs if you can install it; it helps improve the load-balancing performance of the cluster. See also: https://kubernetes.io/zh-cn/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
# 1. Install dependencies
apt install -y ipset ipvsadm
# 2. Load it now
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# 3. Persist to a configuration file so the modules load at boot (ipvs.conf under /etc/modules-load.d/ here)
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# 4. Check if it is loaded
lsmod |grep ip_vs
Installing containerd
After version 1.24, k8s no longer supports docker directly as a container runtime, so this article uses containerd. The binary package can be downloaded from GitHub - containerd; be careful to download the cri-containerd-cni variant of the release.
- Extract to the root directory. The files inside the archive are laid out according to the root directory structure, so they should be extracted directly to /.
tar xf cri-containerd-cni-1.7.22-*.tar.gz -C /
- Creating the configuration file directory and generating the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
- Edit the configuration file /etc/containerd/config.toml and amend the following:
# For linux distributions that use systemd as the init system, it is officially recommended to use systemd as the container cgroup driver.
# Change false to true
SystemdCgroup = true
# Change the pause image address to the image I uploaded to AliCloud; in an intranet environment, change it to the intranet registry address
sandbox_image = "/rainux/pause:3.9"
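A quick way to confirm the two edits are in place is simply grepping the file; this assumes the default config path used above:
# Both lines should reflect the edited values
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml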
- Start containerd
systemctl start containerd
systemctl enable containerd
- Run the command to test if the containerd is working. If you don't get any errors, it's usually fine.
crictl images
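For a slightly stronger smoke test than listing images, you can try pulling a small image. This assumes the host can reach Docker Hub; busybox is just an arbitrary choice, and you can swap in a mirror address if needed:
# Pull a small test image and confirm it shows up
crictl pull docker.io/library/busybox:latest
crictl images | grep busybox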
Generate ca certificate
The k8s and etcd clusters below both use CA certificates. If your organization provides a unified CA, you can directly use certificates issued by it; otherwise, a self-signed CA certificate can be issued to complete the security configuration. A CA certificate is generated here.
# Generate the private key file (ca.key/ca.crt are the names used throughout this article)
openssl genrsa -out ca.key 2048
# Generate the root certificate file from the private key file
# /CN is the hostname or IP address of the master
# days is the validity period of the certificate
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt
# Copy the ca certificate to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
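To sanity-check the generated CA, you can print its subject and validity period (file names as generated above):
# Inspect the subject and validity period of the new CA certificate
openssl x509 -in ca.crt -noout -subject -dates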
Installing etcd
The etcd installation package can be downloaded from the official website. After downloading, unzip it and put the etcd and etcdctl binaries into a directory on the PATH.
- Edit the file etcd_ssl.cnf. The IP address is that of the etcd node.
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.0.31
- Creating an etcd server-side certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
- Creating an etcd client certificate
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
- Edit the etcd configuration file. Directories, file paths, IPs, ports and other information should be modified as appropriate.
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/rainux/apps/etcd/data
ETCD_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.0.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.0.31:2379
ETCD_PEER_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.31:2380"
ETCD_INITIAL_CLUSTER_STATE=new
- Edit /etc/systemd/system/etcd.service. Note that the paths to the configuration file and the etcd binary should be modified according to the actual situation.
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
User=rainux
EnvironmentFile=/home/rainux/apps/etcd/conf/
ExecStart=/home/rainux/apps/etcd/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Starting etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# Check the status of the service
systemctl status etcd
- Verify etcd status using the etcd client
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 endpoint health
# Normally there will be an output similar to the following
https://192.168.0.31:2379 is healthy: successfully committed proposal: took = 13.705325ms
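Beyond the health check, a simple put/get round trip confirms that writes work. The flags are the same as above; adjust the paths to your own layout:
# Reuse the TLS flags via a shell variable for brevity
ETCDCTL_FLAGS="--cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379"
etcdctl $ETCDCTL_FLAGS put /test/hello world
etcdctl $ETCDCTL_FLAGS get /test/hello
etcdctl $ETCDCTL_FLAGS del /test/hello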
Install the control plane
The k8s binary packages can be downloaded from github: https://github.com/kubernetes/kubernetes/releases
Find the download link for the binary package in changelog and download the server binary, which contains the master and node binaries.
Unzip it and move the binaries into the /usr/local/bin directory.
Install apiserver
The core function of the apiserver is to provide HTTP REST interfaces for creating, deleting, updating, querying and watching all kinds of k8s resource objects. It is the central hub for data interaction and communication between the cluster's functional modules, the data bus and data center of the whole system. It is also the API entry point for cluster management and resource quota control, and it provides the cluster's security mechanisms.
- Edit master_ssl.cnf. DNS.5 is the hostname of the server, which should be set in /etc/hosts. IP.1 is the Cluster IP of the kubernetes Master Service (the first virtual service IP), and IP.2 is the server IP of the apiserver.
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-node1
IP.1 = 169.169.0.1
IP.2 = 192.168.0.31
- Generate the ssl certificate files (apiserver.key/apiserver.crt are the names used here)
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=k8s-node1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
- Use cfssl to create the service-account key pair and public key file (sa-key.pem and sa.pub below; the CSR definition is saved as sa-csr.json). cfssl and cfssljson can be downloaded from GitHub - cfssl.
cat <<EOF > sa-csr.json
{
"CN":"sa",
"key":{
"algo":"rsa",
"size":2048
},
"names":[
{
"C":"CN",
"L":"BeiJing",
"ST":"BeiJing",
"O":"k8s",
"OU":"System"
}
]
}
EOF
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -
openssl x509 -in sa.pem -pubkey -noout > sa.pub
- Edit the kube-apiserver configuration file, note that the file path and etcd address should be changed according to the actual situation.
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/home/rainux/apps/certs/apiserver.crt \
--tls-private-key-file=/home/rainux/apps/certs/apiserver.key \
--client-ca-file=/home/rainux/apps/certs/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/home/rainux/apps/certs/sa.pub \
--service-account-signing-key-file=/home/rainux/apps/certs/sa-key.pem \
--apiserver-count=1 \
--endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.0.31:2379 \
--etcd-cafile=/home/rainux/apps/certs/ca.crt \
--etcd-certfile=/home/rainux/apps/certs/etcd_client.crt \
--etcd-keyfile=/home/rainux/apps/certs/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--audit-log-maxsize=100 \
--audit-log-maxage=15 \
--audit-log-path=/home/rainux/apps/kubernetes/logs/ --v=2"
- Edit the service file.
/etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Start apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# Check the status of the service
systemctl status kube-apiserver
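A quick external check: the apiserver's health endpoints are normally readable without a client certificate, so either of the following should answer (the first skips server certificate verification, the second validates it against the CA generated earlier):
# Should print "ok"
curl -k https://192.168.0.31:6443/healthz
# Validate the server certificate against our CA instead of skipping verification
curl --cacert /etc/kubernetes/pki/ca.crt https://192.168.0.31:6443/version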
- Generate the client certificate (client.key/client.crt are the names used here)
openssl genrsa -out client.key 2048
# The /CN name identifies the client user connecting to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
- Create the kubeconfig file that clients use to connect to the apiserver. The server field is the apiserver address (if a load balancer such as nginx sits in front of the apiserver, use its listening address instead). Note that the paths must match your actual configuration. This kubeconfig can also be used by kubectl, so in a development environment you can simply save it as $HOME/.kube/config.
apiVersion: v1
kind: Config
clusters:
- name: default
cluster:
server: https://192.168.0.31:6443
certificate-authority: /home/rainux/apps/certs/ca.crt
users:
- name: admin
user:
client-certificate: /home/rainux/apps/certs/client.crt
client-key: /home/rainux/apps/certs/client.key
contexts:
- context:
cluster: default
user: admin
name: default
current-context: default
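With the kubeconfig in place, kubectl should be able to reach the apiserver. A couple of sanity checks, assuming the file was saved as $HOME/.kube/config:
# Confirm that kubectl can authenticate and query the cluster
kubectl cluster-info
kubectl get ns
kubectl get --raw='/readyz?verbose'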
Installing kube-controller-manager
The controller-manager monitors the state changes of specific resources in the cluster in real time through the interface provided by apiserver. When a resource object does not meet the expected state, the controller-manager tries to adjust its state to the desired state.
- Edit Configuration File
/home/rainux/apps/kubernetes/conf/
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/home/rainux/apps/certs/sa-key.pem \
--root-ca-file=/home/rainux/apps/certs/ca.crt \
--v=0"
- Edit the service file
/etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Starting kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
Install kube-scheduler
- Edit Configuration File
/home/rainux/apps/kubernetes/conf/
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--v=0"
- Edit the service file
/etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Start kube-scheduler
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
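Both components use leader election even on a single node, so a quick way to confirm they are running properly is to look at their leases in kube-system; each should show a recent renew time:
# Each component should hold a lease that is being renewed
kubectl -n kube-system get lease kube-controller-manager kube-scheduler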
Installing the worker node
Installing kubelet
- Edit the file /home/rainux/apps/kubernetes/conf/. Note that hostname-override and kubeconfig need to be modified to match your environment.
KUBELET_ARGS="--kubeconfig=/home/rainux/.kube/config \
--config=/home/rainux/apps/kubernetes/conf/ \
--hostname-override=k8s-node1 \
--v=0 \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
- Edit the kubelet configuration file referenced by --config above:
/home/rainux/apps/kubernetes/conf/
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0 # Service listening address
port: 10250 # Service listening port number
cgroupDriver: systemd # cgroup driver; the default is cgroupfs, systemd is recommended
clusterDNS: ["169.169.0.100"] # Cluster DNS address
clusterDomain: cluster.local # Service DNS domain name suffix
authentication: # Whether to allow anonymous access or to use webhook authentication
  anonymous:
    enabled: true
- Edit the service file /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Starting a kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
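Once the kubelet registers itself, the node should be visible via kubectl. Note that it will stay NotReady until the CNI plugin (calico, installed below) is running:
# The node appears immediately but remains NotReady until a CNI plugin is installed
kubectl get nodes -o wide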
Install kube-proxy
- Edit the configuration file /home/rainux/apps/kubernetes/conf/. The proxy-mode parameter defaults to iptables; if ipvs is installed, it is recommended to change it to ipvs.
KUBE_PROXY_ARGS="--kubeconfig=/home/rainux/.kube/config \
--hostname-override=k8s-node1 \
--proxy-mode=ipvs \
--v=0"
- Edit the service file
/etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
- Start kube-proxy
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
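With proxy-mode set to ipvs, the service rules created by kube-proxy can be inspected with ipvsadm; a virtual server for the kubernetes service (169.169.0.1:443) should appear:
# List the ipvs virtual servers created by kube-proxy
sudo ipvsadm -Ln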
Install calico
- Download the calico configuration file
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
- If you can access docker hub normally, you can use the configuration file directly to create the calico resource objects; otherwise you need to modify the image addresses in it. If you are also using calico 3.25.0, you can use the images I uploaded to AliCloud.
image: /rainux/calico:cni-v3.25.0
image: /rainux/calico:node-v3.25.0
image: /rainux/calico:kube-controllers-v3.25.0
- Perform the installation
kubectl create -f calico.yaml
- Check whether calico's pods are running properly. If everything is fine, their status should be Running; if not, describe the pod to see what went wrong.
kubectl get pods -A
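Once the calico pods are Running, the node should switch from NotReady to Ready:
# The node status should now show Ready
kubectl get nodes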
Installing CoreDNS
- Edit the deployment file. Note that the Service specifies the clusterIP, and the image address is changed to the one I uploaded to AliCloud.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
cluster.local {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local 169.169.0.0/16 {
fallthrough
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
. {
cache 30
loadbalance
forward . /etc/resolv.conf
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: /rainux/coredns:1.11.3
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 169.169.0.100
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
- Create coredns service
kubectl create -f
- In the repo I also put a test manifest for checking whether DNS works. After creating the test objects, install nslookup in the debian pod and test whether it can resolve svc-nginx.
# Create a pod for testing dns
kubectl create -f
# Install nslookup and curl on the debian pods
apt update -y
apt install -y dnsutils curl
# Use nslookup and curl to test if you can request the nginx service via domain name
nslookup svc-nginx
curl http://svc-nginx
Install metrics-server
In newer versions of k8s, both the collection of resource metrics and the HPA feature require metrics-server.
- Edit the configuration file. Note the image address.
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=10250
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls # Add this line parameter to use self-signed certificates
image: /rainux/metrics-server:v0.7.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 10250
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
- Creating Related Resource Objects
kubectl create -f
- Execute the relevant commands to test whether the installation is normal
kubectl top node
kubectl top pod
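Once kubectl top returns data, the HPA feature mentioned above also works. A minimal sketch, assuming a deployment named nginx already exists and has CPU requests set (the name is only an example):
# Scale the example nginx deployment between 1 and 3 replicas at 50% CPU utilization
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=3
# Watch the HPA pick up metrics from metrics-server
kubectl get hpa -w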
Wrap-up
After completing the steps above, a standalone k8s suitable for development and testing is up and running. Adding nodes later is also fairly convenient, and the binary deployment method makes it easy to tweak cluster parameters.