
How can low-code support high concurrency with K8s?


Introduction

In today's digital era, the ubiquity of the Internet and the rapid pace of technology confront applications with unprecedented challenges, the most significant of which is high-concurrency access. As user numbers surge and business scale expands, ensuring that an application remains stable and responsive under high concurrency has become an important issue that every developer and technical team must face.

Kubernetes (K8s), one of the iconic technologies of the cloud-native era, has shown great potential in solving the high-concurrency problem by virtue of its powerful container orchestration, automated deployment, and scalability. By abstracting the underlying resources, K8s provides an efficient, flexible, and scalable container management solution that lets developers focus on implementing business logic without paying too much attention to the complexity of the underlying infrastructure or the operations and maintenance challenges.

However, while K8s provides strong support for handling high concurrency, its complex configuration and management processes still place high demands on developers' technical skills. To lower this threshold and enable more developers to take full advantage of K8s, low-code platforms emerged. By providing a visual development environment, drag-and-drop component libraries, and automated deployment and management tools, low-code platforms greatly simplify application development, testing, and deployment, enabling developers to deliver high-quality software with less code in less time.

Therefore, this article delves into how low-code platforms can be combined with K8s to support highly concurrent applications.

Introduction to K8s

In the early days, organizations ran applications on physical servers. There was no way to limit the resource usage of applications running on a physical server, which led to resource allocation problems. For example, if multiple applications ran on the same physical server, one application could consume most of the resources and degrade the performance of the others.

As a result, virtualization technology was introduced. Virtualization allows you to run multiple virtual machines (VMs) on a single physical server. It isolates applications from each other in different VMs and provides a level of security, because one application's information cannot be freely accessed by another.

While virtualization improves resource utilization, each virtual machine runs its own operating system, which leads to higher resource consumption (memory and storage), higher costs, and less flexibility. Container deployment therefore became popular. Containers are similar to VMs but have looser isolation, allowing them to share the host operating system (OS); they are considered more lightweight than VMs. Like VMs, each container has its own file system, CPU share, memory, and process space, and because containers are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers are a great way to package and run applications, but in production you need to manage the containers that run your application and make sure the service does not go offline: for example, if one container fails, another must be started. Wouldn't it be easier if the system handled this behavior automatically? This is where Kubernetes comes in. It provides a framework for running distributed systems resiliently, offering load balancing, storage orchestration, self-healing, and other features to meet the system's scaling requirements.

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large and fast-growing ecosystem of widely used services, support, and tools. The name Kubernetes is derived from a Greek word meaning "helmsman" or "pilot": it "steers" containers to deliver a wide range of functionality.

Introduction to Low-Code Platforms

Low-code development is a software development methodology designed to speed up application development and delivery by reducing the amount of code written by hand. It is based on graphical interfaces and visualization tools that let developers build applications through simple operations such as drag-and-drop and configuration. Low-code development has the following features:

  • Low technical threshold: The low-code platform lowers the technical threshold through a graphical interface and predefined components, enabling non-professional programmers to participate in application development.
  • High development efficiency: Low-code platforms can significantly improve development efficiency and shorten development cycles due to the reduced effort of writing code.
  • Flexibility and scalability: The low-code platform supports customized components and business logic to meet different application scenarios and needs, while supporting subsequent functional expansion and upgrading.
  • Easy maintenance and upgrading: The low-code platform provides a unified development environment and management tools, making application maintenance and upgrading simpler and more efficient.

Low-code development has the following advantages:

  • Rapid development: no need to write a lot of code, greatly reducing the development cycle.
  • Lower the threshold: non-technical people can easily get started, lowering the threshold of application development.
  • Easy to maintain: Due to the modular design, the application is easy to maintain and expand.
  • Cost savings: Reduced labor and time costs save companies money on development.

In the future, low-code platforms will continue to evolve toward greater intelligence, deeper integration, and cloud-native support, providing enterprises with more efficient and flexible application development solutions. As the technology matures and the market expands, low-code platforms will play an important role in more fields and scenarios. We now move to the main part of this article: how to use a low-code platform integrated with K8s to achieve load balancing for high-concurrency application scenarios.

Specific steps to support high concurrency with low-code and K8s

With the basic concepts covered, we can now introduce how to integrate a low-code platform with k8s to achieve load balancing. There are many low-code development platforms on the market; this article takes the enterprise-grade low-code development platform Forguncy (活字格) as an example and introduces its k8s load balancing scheme.

Environment preparation

  • Forguncy Designer
  • Forguncy Server Manager

Installing K8s

The environment is the foundation for running the software, so we need to prepare a K8s environment with at least two nodes, a file server, and an image repository. Since k8s is deployed as a cluster, at least two hardware nodes are required. In this example, three nodes are prepared, as follows:

  • Log in to k8s-server and install k8s. This article does not cover the installation of k8s itself; refer to the following document or other related technical materials: /zh-cn/docs/tasks/tools/install-kubectl-linux/
  • If you are just learning, you can use k3s to simulate a k8s environment. k3s is a fully compliant Kubernetes distribution that behaves almost identically to k8s while requiring about half the resources, which makes it ideal for learning and validation (this tutorial uses k3s to build the environment). You can also use minikube to simulate a k8s cluster on a personal computer.
  • Once the environment is installed, you should be able to manage k8s normally with kubectl commands on the server node. Use kubectl get node to check the status of the cluster nodes, as shown below.
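
A quick sanity check (node names and versions below are illustrative for a three-node k3s setup):

# All nodes should report Ready before continuing
kubectl get node
# NAME          STATUS   ROLES                  AGE   VERSION
# k8s-server    Ready    control-plane,master   2d    v1.28.x+k3s1
# k8s-worker1   Ready    <none>                 2d    v1.28.x+k3s1
# k8s-worker2   Ready    <none>                 2d    v1.28.x+k3s1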

File server

To ensure that shared resources are available on every node, we need to prepare a file server, create a shared file directory, and make sure this shared path can be mounted to a specified path on each node. The file server protocol can follow your existing standards, such as NAS or NFS. In this article, we build the shared directory on k8s-master via NFS.

  1. Install the nfs dependencies on all nodes.
  2. Log in to k8s-master and create the root directory fgc-k8s-lbroot. The directories below can also be created on different file servers or in different locations; adjust to your actual situation (the root path is customizable). Then create 5 subfolders in this directory (their names cannot be changed).
# Root directory for sharing
sudo mkdir /fgc-k8s-lbroot
# Directory for attachment storage
sudo mkdir /fgc-k8s-lbroot/ForguncyAttach
# Log storage path
sudo mkdir /fgc-k8s-lbroot/ForguncyLogs
# Backup storage path
sudo mkdir /fgc-k8s-lbroot/ForguncyRestore
# Website storage path
sudo mkdir /fgc-k8s-lbroot/ForguncySites
# Website executable storage path
sudo mkdir /fgc-k8s-lbroot/ForguncySitesBin
  3. Change the owner of these directories and grant read/write permissions.
sudo chown nobody:nogroup /fgc-k8s-lbroot/*
sudo chmod -R 777 /fgc-k8s-lbroot/*
  4. Export the directory so that it can be shared externally.
# This file is one of the configuration files for the NFS server. By editing this file, the system administrator can control which file systems can be mounted and accessed by remote hosts via the NFS protocol.
sudo vim /etc/exports
# Add the following configuration to the file to make it shared, save and exit when done
/fgc-k8s-lbroot *(rw,sync,no_subtree_check)
# Refresh the configuration after exiting to make sure it takes effect
sudo exportfs -arv
  5. The directory /fgc-k8s-lbroot can now be mounted and read/written by other servers. Log in to each worker server and mount the directory.
# Log in to the worker server
ssh k8s-worker1
# Create the mount path
sudo mkdir -p /mnt/fgc_k8s_lb
# Open the system mount configuration file
sudo vim /etc/fstab
# Add the NFS share exported above to fstab (198.19.249.12 is the file server's IP)
198.19.249.12:/fgc-k8s-lbroot /mnt/fgc_k8s_lb nfs hard,intr,rw 0 0
  6. Note that the fstab entry above mounts automatically after the node reboots; if you do not reboot, mount it manually with the mount command (see the sketch after this list).
  7. The file share mount is now complete, and you can read and write the shared directory under the mount path of any node.
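
A minimal verification sketch, run on a worker node (the test file name is arbitrary):

# Mount everything declared in /etc/fstab without rebooting
sudo mount -a
# Confirm the NFS share is mounted
df -h | grep fgc_k8s_lb
# Verify read/write access from this node
touch /mnt/fgc_k8s_lb/nfs-test.txt && ls -l /mnt/fgc_k8s_lb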

OK, now that the k8s environment for Forguncy has been set up, let's officially kick off the installation in the next section!

Installing the Forguncy Server

Installing Helm

To facilitate k8s package management, we introduce Helm to manage the configuration templates for the Forguncy service.

# Download the installer and install it
wget /helm-v3.14.
sudo tar -zxvf helm-v3.14.
sudo mv linux-amd64/helm /usr/local/bin/helm
# Check the version
helm version

After the installation is complete, run the version command to verify it:
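Typical output looks like the following (the exact version and commit will vary with your download):

helm version
# version.BuildInfo{Version:"v3.14.0", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.21.x"}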

Download the Forguncy load balancing server installation file

Click Hzg-K8S-HelmChart-10.0.3. to download the file, then extract it to a local directory; you will see the following directory of files:

Detailed settings are as follows:

  • The main configuration file: if you use an offline repository, modify the repository address and tag; otherwise the official image is used by default. Also change the file server address to the one you actually use.
  • The chart metadata file: you can change the chart's name here; usually keep the default.
  • The redis service file: a redis service is started by default. If you do not need it, delete this file together with the corresponding redis pod file below.
  • The service file for the logging service: under load balancing, logging runs in a separate pod and does not itself support load balancing (there can be only one such pod); this generally does not need to be modified.
  • The service file for the Forguncy service, which generally does not need to be modified.
  • The K8s PV file: modify it according to your own storage situation; if you use an NFS-protocol file service, it usually does not need to be modified.
  • The K8s PVC file, which generally does not need to be modified.
  • The pod file for the default redis service: if you do not need redis, delete this file together with the redis service file above.
  • The pod file for Forguncy logging, which generally does not need to be modified.
  • The pod file for the Forguncy service, which generally does not need to be modified.

Modify the contents of the chart file

As the descriptions above show, the template configuration is our main focus. In general there is no need to modify the structure of the templates; only the values of the variables they reference need to change. For the environment in this article, the values modified are the image repository address and tag and the file server address.
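As a rough sketch, a values file for such a chart might contain entries like these; all key names here are hypothetical and do not reflect Forguncy's actual chart schema:

# Illustrative values only - key names are hypothetical
image:
  repository: registry.example.com/forguncy/forguncy-server  # offline image repository address
  tag: "10.0.3"                                              # image tag
fileServer:
  address: 198.19.249.12        # the NFS file server prepared earlier
  rootPath: /fgc-k8s-lbroot     # the shared root directory
replicaCount: 1                 # scaled up later with kubectl scale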

Packaging and Installation

After the configuration is complete, run helm package your-chart-name in the directory containing the chart to package our k8s configuration into a standard chart archive with a .tgz suffix. Then run the helm install command to install the Forguncy chart onto k8s.

# Package: running the command below generates a forguncy-k8s-server-10.0.0. file, which is the helm installation file
helm package mychart-forguncy
# Use the forguncy-k8s-server-10.0.0. file generated above to install the Forguncy service (myfgcchart-test3 below is a custom release name; keep it short)
helm install myfgcchart-test3 forguncy-k8s-server-10.0.0.

After executing the commands above, the Forguncy load balancing service is installed and enters the configuration phase. To update or uninstall it, run the corresponding helm commands, for example:

# Upgrade with new files
helm upgrade myfgcchart-test3 /root/forguncy-k8s-server-10.0.0. --values /root/mychart-forguncy/
# Uninstall the installed Live Grid service
helm uninstall myfgcchart-test3

At this point, the Forguncy service has been successfully installed on the k8s cluster, and we can check its status via kubectl commands:
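For example (resource names will match your release name):

# List the deployments created by the chart
kubectl get deployment
# List the pods and the nodes they are scheduled on
kubectl get pod -o wide
# List the services and the ports they expose
kubectl get svc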

Service Description

We will create three deployments: forguncy-pod, redis-pod, and influx-pod.

  • forguncy-pod: the pod set declaration for the Forguncy service. It runs all Forguncy services, including the console and the business applications we publish. It is the main program pod of the Forguncy service and can have multiple replicas.
  • redis-pod: the pod set declaration for the redis service, used for state sharing when load balancing Forguncy applications. Its function is similar to the redis used by the original load balancing policy. It is an optional install.
  • influx-pod: the pod set declaration for the influxDB service, which serves Forguncy's logging module. It is the logging system pod of the Forguncy service and can have only one replica.

At the same time, a service is created for each deployment and made available on the port the service exposes. The service can be reached via any k8s node IP plus the port number; for example, the k8s management console can be accessed at http://198.19.249.12:32666/UserService/AdminPortal/, where 198.19.249.12 is the IP of k8s-server and 32666 is the port that forguncy-service exposes. The result of opening it directly in a browser is shown in the figure.
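
Exposure via "any node IP plus port" is the behavior of a NodePort Service. A minimal sketch follows; the name and selector labels are illustrative, and only port 32666 comes from the example above:

apiVersion: v1
kind: Service
metadata:
  name: forguncy-service        # illustrative name
spec:
  type: NodePort                # reachable on every node's IP
  selector:
    app: forguncy-pod           # illustrative selector label
  ports:
    - port: 80                  # in-cluster port (illustrative)
      targetPort: 80            # container port (illustrative)
      nodePort: 32666           # the externally exposed port from the example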

Configuring Load Balancing

Enable load balancing

After the Forguncy service is installed, load balancing is not enabled by default; you need to enable the relevant settings manually. While there is still a single pod, log in to the Forguncy server management console and enable load balancing.

The load balancing entry has not moved; it remains in the settings list. When load balancing is enabled, the user information database must be configured as an external database, and the redis service must be configured as well. Note that given how k8s works, a pod's IP changes every time it is created or destroyed, so configure redis by its Service name rather than by a specific IP.

PS: the default redis service is installed without a password.
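
You can confirm that the redis Service resolves by name from inside the cluster; the name redis-service below is illustrative, so use the Service name created by your chart:

# Resolve the redis Service name from a throwaway pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis-service
# The fully qualified form is <service-name>.<namespace>.svc.cluster.local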

After the connection test succeeds, clicking Save restarts the Forguncy service and regenerates a new forguncy-pod.

At this point, to ensure that subsequent applications and nodes can be accessed normally, make sure your Forguncy cluster has activated the corresponding k8s authorization.

Dynamic scaling

Once load balancing is enabled and the restart succeeds, the work on the Forguncy side is complete. All subsequent scaling and self-healing behavior relies on k8s's own policies.

Currently there is only one pod for the Forguncy service, but we can now scale it dynamically according to actual needs.

# Scale the pod replicas up to 6
kubectl scale deployment fgc-chart-test-forguncy-pod --replicas=6

This is kubectl's built-in dynamic scaling command: k8s dynamically creates pods containing the service according to the replicas value you set and brings them into the load automatically. You can check the current pod status with kubectl get pod, and follow the pods' progress during scaling with the --watch flag.
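For example (the deployment name comes from the scale command above):

# Follow pod creation as the deployment scales out
kubectl get pod --watch
# Scale back down when the load subsides
kubectl scale deployment fgc-chart-test-forguncy-pod --replicas=2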

If you want to see which pod a request was routed to, you can enable the corresponding configuration switch. The configuration file is located at:

/fgc-k8s-lbroot/ForguncySites/SettingsFolder/

Just set the value of ShowXUpstream to true. After you reset the number of pods, you can see the name of the pod serving you via a cookie.

Application Publishing and Routing

The application can now be published to the cluster. When the app is published successfully, its access URL is automatically configured as http://{publishserver}/{appname}, for example http://198.19.249.12:32666/stock-management, where:

{publishserver} (198.19.249.12:32666) is the publish server you configured in the designer.

{appname} (stock-management) is the name of the application you published in the designer.
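
A quick reachability check from any machine that can reach a node, using the example address above:

# The published app should answer on any node IP plus the exposed port
curl -I http://198.19.249.12:32666/stock-management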

Note that since the application service runs on a cluster, you must use an external database; applications that use the built-in database will fail when accessed.

Of course, in actual O&M scenarios we may need to route application services. Some Forguncy services also require session affinity, so session affinity must be enabled for both the Ingress and the Service in k8s (the Service is already configured by the install script; the Ingress must be added manually).

For example, the Ingress adds cookie-based annotations:

nginx.ingress.kubernetes.io/affinity: "cookie"
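
A minimal Ingress sketch with cookie-based session affinity, assuming the ingress-nginx controller (the host, names, and port are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: forguncy-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"           # route a client to the same pod
    nginx.ingress.kubernetes.io/session-cookie-name: "route" # name of the affinity cookie
spec:
  rules:
    - host: forguncy.example.com          # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: forguncy-service    # the Service created by the chart
                port:
                  number: 80              # illustrative port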

Summary

That concludes this share on how low-code integrates with K8s to achieve load balancing for high-concurrency scenarios. Thank you for reading!

Extended Links:

From Form-Driven to Model-Driven, Explaining the Trends in Low-Code Development Platforms

What is a low-code development platform?

Branch-based versioning to help low-code move from project delivery to customized product development