
Building a CI/CD system with a server bought for 99 yuan


The story starts like this: one idle day I bought a 99-yuan-per-year server on Alibaba Cloud, deployed a Git service on it to host the code I write for fun, and also used it as a development server. To make application management easier, I started out deploying applications with plain Docker, then upgraded to docker-compose. After all, declarative deployment is more sensible than hand-typing commands, and docker-compose's handling of dependent services is even nicer. This harmonious setup lasted quite a while, until I recently picked up a trial server on Tencent Cloud. Trial or not, it had to be put to work, otherwise how would I ever get to experience it? After some thought, I decided to form a small cluster and set up a CI/CD pipeline. That would not only make coding a lot more enjoyable, it would also give me a decent understanding of K8s.
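For anyone who has not used it, a minimal docker-compose file shows what "declarative" means here; the service names, image, and ports below are made-up placeholders, not my actual setup:

  # docker-compose.yml: a web app that declares its dependency on a database
  services:
    web:
      image: registry.example.com/my-app:latest
      ports:
        - "8080:8080"
      depends_on:
        - db            # docker-compose starts db before web
      restart: unless-stopped
    db:
      image: postgres:16
      volumes:
        - db-data:/var/lib/postgresql/data
      restart: unless-stopped
  volumes:
    db-data: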

K3S

K3s is a lightweight, fully compliant Kubernetes distribution. It is easy to install, needs only about half the memory of standard Kubernetes, and ships all components in a single binary of less than 100 MB (~70 MB). As for the name: Kubernetes is a 10-letter word stylized as K8s, so something half its size would be a 5-letter word, stylized as K3s.

K3s is very light and well suited to a 2-core, 2 GB cloud server. Although the binary is only ~70 MB, it actually contains both roles (server and agent, much like master and worker). On top of that, K3s bundles the dependencies it needs, including containerd, Flannel, Traefik, the built-in Service LB, and more. It looks excellent on paper, but getting hands-on still has a learning curve. Lacking the basics, I spent two days just getting comfortable with kubectl, and between Flannel and the LoadBalancer I reinstalled the server several times. Fortunately every question has a standard answer somewhere; I grew by stepping into pit after pit and finally got K3s running.
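For reference, the standard quick-start from the K3s docs is just two commands: one on the control-plane node and one on the node that joins it. The placeholders below need to be replaced with your own server address and token:

  # On the first server (control plane): install K3s as a systemd service
  curl -sfL https://get.k3s.io | sh -

  # On the second machine (agent): join the cluster using the server's token,
  # which lives at /var/lib/rancher/k3s/server/node-token on the server
  curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -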

Harness

It used to be called Gitness, and the company behind it has plenty of other products, such as Drone. Now that it has grown bigger and stronger, Gitness has been renamed Harness, and it consumes very little resources. Harness comes with a built-in pipeline feature, which is my favorite approach: the CI configuration lives right in the project. Its backend is written in Go, the UI is React, and it is bundled with webpack. Perhaps the bandwidth where they are is generous enough, because the bundle is not compressed (no gzip, for example) and the service does not compress responses either. On a server with only 3 Mbit of bandwidth the loading speed is painful, so the only option is to gzip by hand at the proxy layer (and add HTTP caching).
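For what it's worth, that gzip-plus-cache workaround at the proxy layer is only a few lines in a Caddyfile; the domain, upstream port, and cache lifetime below are assumptions, not the exact values I use:

  # Caddyfile: compress responses and cache static assets in front of the Git service
  git.example.com {
      encode gzip
      # long-lived cache for static assets
      @static path *.js *.css *.woff2
      header @static Cache-Control "public, max-age=604800"
      reverse_proxy 127.0.0.1:3000
  }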

ArgoCD

Argo CD is a declarative GitOps continuous delivery tool for Kubernetes. Application definitions, configurations, and environments should be declarative and version-controlled; application deployment and lifecycle management should be automated, auditable, and easy to understand. That is the official introduction. Simply put, it can:

  1. Pull Kubernetes configuration from a Git repository and automatically sync it to the cluster.
  2. Provide a visual UI that supports application management, version rollback, and other operations.
  3. Manage everything declaratively: all changes are driven by Git, which keeps deployments consistent.

In my architecture, ArgoCD keeps a close eye on the Git repository; as soon as it detects an update, it automatically deploys the latest version. If something goes wrong, one click of Rollback in the UI fixes it within seconds. Elegant and efficient.
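As a rough sketch (the repository URL, path, and names below are placeholders, not my real ones), an ArgoCD Application that watches a manifest repo and syncs it automatically looks like this:

  # ArgoCD Application: watch a Git path and keep the cluster in sync with it
  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: my-app
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://git.example.com/me/deploy-manifests.git
      targetRevision: main
      path: apps/my-app
    destination:
      server: https://kubernetes.default.svc
      namespace: my-app
    syncPolicy:
      automated:
        prune: true      # delete resources that were removed from Git
        selfHeal: true   # undo manual drift in the cluster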

The CI/CD flow in practice

CI stage (Harness Pipeline)
After code is pushed to the Git repository, Harness triggers the pipeline:

  1. Check out the code
  2. Install dependencies and build the application
  3. Build the Docker image
  4. Push the image to Alibaba Cloud Container Registry (the personal edition is free)
  5. Update the image tag in the deployment manifests stored in Git (see the shell sketch after this list)
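Steps 3 to 5 boil down to something like the following shell sketch; the registry address, image name, manifest repo, and the GIT_SHA variable are placeholders, not the exact values from my pipeline:

  # Build and push the image, then bump the tag in the manifest repo so ArgoCD picks it up
  docker build -t registry.cn-hangzhou.aliyuncs.com/me/my-app:${GIT_SHA} .
  docker push registry.cn-hangzhou.aliyuncs.com/me/my-app:${GIT_SHA}

  git clone https://git.example.com/me/deploy-manifests.git
  cd deploy-manifests
  sed -i "s|\(image: .*/my-app:\).*|\1${GIT_SHA}|" apps/my-app/deployment.yaml
  git commit -am "chore: bump my-app to ${GIT_SHA}" && git push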

CI ends here, and the next step is left to ArgoCD.

CD stage (ArgoCD deployment)

  1. ArgoCD automatically pulls the latest configuration (stored in the Git repository).
  2. The application is deployed via automatic sync, or a sync is triggered manually (CLI equivalents below).
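The same operations are also available from the argocd CLI instead of the UI, for example (the application name is a placeholder):

  argocd app sync my-app          # trigger a manual sync
  argocd app history my-app       # list previously deployed revisions
  argocd app rollback my-app 1    # roll back to history ID 1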

The result is a complete closed loop: code commit → automatic build → automatic deployment. The whole process needs no manual intervention, and updates are fully automatic!

Pitfall guide

The biggest pitfall is the hallucination problem of large AI models. By now AI has become a daily necessity: it analyzes problems quickly and offers solutions. Precisely because of that, it led me on a grand detour that ended right back where I started. Here are the pits AI walked me into this time.

1. Disabling Traefik
When I installed K3s, the AI had me disable Traefik outright and use Nginx or Caddy as the reverse proxy instead. I happened to be familiar with Caddy, so I jumped in without hesitation. To be honest, Caddy configuration really is simple, especially for enabling Let's Encrypt and gzip compression, so Caddy it was. At first it was very convenient while I deployed projects by hand. Only after installing ArgoCD and switching to automatic deployment did I see the problem: the application deploys itself, but what about binding the domain name? Log in to the server again every time? No way! Only then did I realize that an Ingress Controller is irreplaceable. Still, the detour not only deepened my understanding of Ingress Controllers, it also taught me a lot about LoadBalancers through the problems caused by the Flannel configuration (the AI also told me to disable the bundled ServiceLB and install MetalLB myself, and I obediently followed along).
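This is exactly what an Ingress resource solves: the domain binding lives in Git next to everything else, so ArgoCD deploys it too. A minimal sketch using K3s's bundled Traefik controller (the host, namespace, and service names are placeholders):

  # Ingress handled by the bundled Traefik: routes a hostname to a Service
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
    namespace: my-app
  spec:
    ingressClassName: traefik
    rules:
      - host: app.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: my-app
                  port:
                    number: 80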

2. Harness Pipeline configuration
This one I can't entirely blame on the AI; after all, I chose the tool. When it came to building Docker images with a pipeline plugin, the sample code in Harness's own plugin documentation did not apply to Harness pipelines, and the configuration file the AI made up for me was no better; it was frustrating enough that I almost gave up. Fortunately, I reworked it based on the example pipeline configuration and the official docs, and finally pulled through.

Finally

From the initial 99-yuan server to a complete K3s + CI/CD setup, I stepped into plenty of pits along the way, but I also gained a lot. The learning curve of Kubernetes really isn't low, but as long as you are willing to tinker, you can always find the answers.
If you are interested in K3s, you can check out my detailed notes: https:///docs/k3s/intro/