Author: Yang Ming, Kubernetes Evangelist, Public K8s Technical Circle Manager
We recently started a new project: a learning site for independent developers. Our goal is to help you build real applications with Figma, Python, Golang, React, Vue, Flutter, ChatGPT, and more, and grow into something of a full-stack developer along the way.
Since the project is still in its early stages, we need to be as efficient as possible and reduce duplication of effort so that we can roll out new courses faster. The technology stack used for this project is listed below:
- Front-end: React (Next.js), Tailwind CSS
- Backend: Python, Django, MySQL, Celery, Redis
Our previous project was deployed with Docker directly on Alibaba Cloud servers to keep costs down, so naturally we deploy this project with Docker containers as well. But this project involves more services, and some of them, such as Celery, are fairly resource-hungry, so our server could no longer keep up. We therefore needed another low-cost way to deploy, preferably one that could also cope with growing traffic.
About Sealos
In this situation I thought of Kubernetes, which solves this problem well. But building my own Kubernetes cluster would cost even more, and a cloud provider's managed Kubernetes service, such as Alibaba Cloud's ACK or Tencent Cloud's TKE, would not be cheap either. That is when I thought of the Sealos cloud service, described on its official website as: "Sealos is a cloud operating system that deploys, manages and scales applications in seconds without cloud computing expertise. It's just like using a personal computer!"
In the end we chose the Sealos cloud service directly, mainly for its efficiency and economy: you pay only for containers, and auto-scaling eliminates wasted resources, which results in significant cost savings.
This fits our needs very well. I don't need a full Kubernetes cluster at all; to put it bluntly, I just need to run a few Pods. Not having to maintain a whole cluster reduces costs (financial, operational, and so on) considerably. The Sealos cloud service also provides many features I need, such as auto-scaling, monitoring, and logging. With such a service I can focus on development without worrying about operations and maintenance, so why not?
Sealos is a Kubernetes-based cloud service, so before using it you are advised to learn the basic concepts of Kubernetes, such as Pod, Service, Deployment, and Ingress; that will make it easier to get started. Of course, not knowing them does not stop you from using it, because the Sealos cloud service provides friendly interfaces that make deployment, management, scaling, and more quite easy.
Containerization
First we need to containerize our services and package them as Docker images. Containerization is, to put it bluntly, just that: packaging a service as a Docker image. Our backend service is a Django project, and its corresponding Dockerfile looks like this:
FROM python:3.11.4-slim-buster
LABEL author=icnych@
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG False
WORKDIR /app
# Switch APT to a domestic mirror and install build dependencies.
# (The mirror host was lost in the original; mirrors.aliyun.com is an example.)
RUN sed -i 's/deb.debian.org/mirrors.aliyun.com/g' /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y pkg-config python3-dev default-libmysqlclient-dev build-essential
# The PyPI mirror host was also lost in the original; Aliyun's mirror is an example.
RUN pip install --upgrade pip --index-url https://mirrors.aliyun.com/pypi/simple && \
    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple
COPY . .
RUN pip install -r requirements.txt
RUN apt-get clean autoclean && \
    apt-get autoremove --yes && \
    rm -rf /var/lib/{apt,dpkg,cache,log}/
EXPOSE 8000
# "fastclass" is the Django project name, inferred from the Celery app name used later.
CMD ["gunicorn", "--bind", ":8000", "--workers", "4", "--access-logfile", "-", "--error-logfile", "-", "fastclass.wsgi:application"]
This Dockerfile is simple: it builds on the python:3.11.4-slim-buster base image, installs some dependencies, and finally starts the Django project with gunicorn.
Also note that we're using Celery to handle asynchronous tasks, using Redis as the broker, so we need to configure Celery in the Django configuration file, as shown below:
# ==========Celery===========
# settings.py fragment (requires "import os" at the top of the file)
REDIS_URL = os.environ.get("REDIS_URL", "localhost:6379")
REDIS_USER = os.environ.get("REDIS_USER", "default")
REDIS_PASSWORD = os.environ.get("REDIS_PASSWORD", "default")
REDIS_BROKER_DB = 1
REDIS_RESULT_DB = 2
CELERY_BROKER_URL = (
    f"redis://{REDIS_USER}:{REDIS_PASSWORD}@{REDIS_URL}/{REDIS_BROKER_DB}"
)
#: Only add pickle to this list if your broker is secured
#: from unwanted access (see the Celery security user guide)
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_RESULT_BACKEND = (
    f"redis://{REDIS_USER}:{REDIS_PASSWORD}@{REDIS_URL}/{REDIS_RESULT_DB}"
)
CELERY_TASK_SERIALIZER = "json"
CELERY_TIMEZONE = "Asia/Shanghai"
CELERY_ENABLE_UTC = True
Here we use environment variables to configure the Redis connection information, so it can be supplied when the container starts. The database connection is configured through environment variables in the same way, making it easy to use different connection settings in different environments.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.environ.get("DB_NAME", "fastclass"),
        "USER": os.environ.get("DB_USER", "root"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "root"),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", 3306),
        "OPTIONS": {"charset": "utf8mb4"},
    }
}
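The env-with-default pattern used throughout these settings can be sketched in isolation. The helper functions below are a minimal illustration, not part of the project's code:

```python
import os

def env(key, default, environ=os.environ):
    """Look up a setting in the environment, falling back to a default,
    mirroring the pattern used in the Django settings above."""
    return environ.get(key, default)

def broker_url(environ=os.environ):
    # Assemble the Celery broker URL the same way settings.py does.
    user = env("REDIS_USER", "default", environ)
    password = env("REDIS_PASSWORD", "default", environ)
    host = env("REDIS_URL", "localhost:6379", environ)
    return f"redis://{user}:{password}@{host}/1"

# With no overrides in the environment, the defaults apply:
print(broker_url(environ={}))  # → redis://default:default@localhost:6379/1
```

When the container starts, the real values are injected as environment variables and the same code picks them up unchanged.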
Next is the front-end project. It also needs to be containerized, and its corresponding Dockerfile looks like this:
FROM node:18.19.0-alpine3.18 AS base
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine
# to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat

# 2. Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY . .
# Switch the npm registry to a mirror, then install and build.
# (The registry host was lost in the original; substitute your preferred mirror.)
RUN npm config set registry https://your-mirror.example.com/repository/npm/ && \
    npm install sharp && npm install --production && npm run build

FROM base AS runner
WORKDIR /app/web
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
# Copy only the files needed at runtime (several file names below were lost in
# the original; pm2.json, entrypoint.sh, and package.json are assumed names).
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/docker/pm2.json ./pm2.json
COPY --from=builder /app/docker/entrypoint.sh /entrypoint.sh
COPY --from=builder /app/package.json ./package.json
RUN npm install pm2 -g && chmod +x /entrypoint.sh
EXPOSE 3000
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
Here we use a multi-stage build: starting from the node:18.19.0-alpine3.18 image, we install the dependencies and build the project, then in the final stage start it with pm2. pm2 is a process management tool that makes it easy to manage processes: starting, stopping, restarting, and so on. The ENTRYPOINT points to a startup script (entrypoint.sh is an assumed name, since the original file name was lost), whose contents are shown below:
#!/bin/bash
set -e
#if [[ -z "$APP_URL" ]]; then
# export NEXT_PUBLIC_PUBLIC_API_PREFIX=${APP_API_URL}/api
#else
# export NEXT_PUBLIC_PUBLIC_API_PREFIX=${APP_URL}/api
#fi
#
#export NEXT_PUBLIC_SENTRY_DSN=${SENTRY_DSN}
/usr/local/bin/pm2 -v
/usr/local/bin/pm2-runtime --raw start /app/web/pm2.json  # pm2.json is an assumed name
This script essentially just starts pm2. The corresponding pm2 configuration file is shown below:
{
"apps": [
{
"name": "FastClass",
"exec_mode": "cluster",
"instances": 2,
"script": "./node_modules/next/dist/bin/next",
"cwd": "/app/web",
"args": "start"
}
]
}
Finally, we build the front-end and back-end services into Docker images and push them to an image repository. With that, our services are containerized.
Deployment
Now that we have the application images, the next step is simply to deploy them on the Sealos cloud service.
The first thing to do, of course, is to register an account. Users in mainland China need to complete real-name verification, and new accounts receive a 5 CNY trial credit to experiment with. The console interface is shown below:
Then we can choose an availability zone according to our business needs; here we chose the Beijing A region.
Since our backend application depends on the MySQL and Redis databases, we first need to create them. Sealos provides a dedicated database service for this (built on KubeBlocks under the hood). Select "New Database", then choose the type of database to create; we start with MySQL.
Then we configure the database name and the CPU and memory resources. We can choose the minimum configuration and adjust the resources at any time if they turn out to be insufficient; the CPU and memory configured here actually correspond to the resource limits of the underlying Pods. For the number of instances, a minimum of 3 is recommended to ensure high availability of the database. The estimated price for the selected resources is shown on the left. We can also switch to view the corresponding YAML resource manifest:
This is actually a CR instance of KubeBlocks. After the configuration is complete, click the "Deploy" button in the upper right corner and wait for the deployment to complete. You can deploy another Redis database in the same way.
Click "View Details": on the left you can see the database connection information, which our backend service will need, and on the right the real-time monitoring of the database. Based on this information we can adjust the database's resource allocation; we can also back up the database, import data online, and so on.
Once the databases are ready, we can deploy our application. Click "Application Management" on the home page to create a new application. Let's deploy the backend service first, filling in the form with the information for the backend image. There are two deployment modes: fixed instances and elastic scaling. Elastic scaling is the familiar HPA mode; for example, here we configure it to automatically scale up to 2 Pods when memory utilization exceeds 80%, and we can configure CPU, memory, and other resources as well.
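Under the hood, the elastic-scaling mode corresponds to a Kubernetes HorizontalPodAutoscaler. A hand-written sketch of such a manifest (the name is an assumed example, not the exact manifest Sealos produces):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fastclass-api      # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fastclass-api
  minReplicas: 1
  maxReplicas: 2           # scale up to 2 Pods, as configured on the page
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # the 80% memory threshold from the page
```

The HPA watches average memory utilization across the Pods and adjusts the replica count between minReplicas and maxReplicas.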
If you want to expose the backend service to the outside world, you can enable public access in the network configuration, which creates an Ingress resource so the service can be reached over the public network. We don't need that for now, so we leave the box unchecked.
You also need to configure environment variables in the advanced configuration, such as database connection information, Redis connection information, etc., so that our back-end services can connect to the database and Redis.
The configuration we fill in on the page actually corresponds to resources inside Kubernetes, such as Deployment, Service, Ingress, HPA, and Secret. The Sealos cloud service generates the corresponding resource manifests from our configuration and then deploys them onto its Kubernetes cluster.
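For example, the image and environment variables filled in on the page end up in a Deployment roughly like the following. This is a hand-written sketch with assumed names and values, not the actual manifest Sealos generates:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastclass-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastclass-api
  template:
    metadata:
      labels:
        app: fastclass-api
    spec:
      containers:
        - name: fastclass-api
          image: registry.example.com/fastclass/api:v1   # assumed image
          ports:
            - containerPort: 8000
          env:
            - name: DB_HOST                      # from the env-var form
              value: fastclass-mysql.ns-example.svc   # assumed DB host
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: fastclass-mysql-conn     # assumed Secret name
                  key: password
```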
Once configuration is complete, click the "Deploy" button in the upper-right corner and wait for the deployment to finish. Afterwards, we can see in the application list that our backend service has been deployed successfully.
Deploy the front-end service in the same way. Note that public access must be enabled here, because the front-end service needs to be exposed so users can reach our website.
When public access is enabled, the system automatically assigns us a domain name. Of course we want to use our own domain, so we configure a custom domain and add a CNAME record at our DNS provider that resolves the custom domain to the auto-generated address.
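The DNS record looks something like this (both domain names here are placeholders; the actual auto-generated address is shown in the Sealos network panel):

```
www.yourdomain.com.    CNAME    your-app-id.cloud.sealos.io.
```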
Now our front-end service can be reached through the custom domain, but one problem remains: the front-end still needs to call the backend. Requests to the site root should go to the front-end service, while requests to /api/xxx should reach the backend service. That requires an Ingress routing rule forwarding /api to the backend, but the Sealos cloud service's pages do not currently support configuring Ingress routing rules, so we have to configure them manually.
Clicking "Terminal" opens a terminal into the Kubernetes cluster, where we can manage it with kubectl. First we import our HTTPS certificate into the cluster, using a Secret object to store the certificate data; then we modify the automatically generated Ingress object to replace its HTTPS certificate with our own.
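The certificate import amounts to creating a TLS Secret; a sketch (the Secret name is an assumed example), which can also be created in one step with `kubectl create secret tls fastclass-tls --cert=tls.crt --key=tls.key`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: fastclass-tls        # assumed name; referenced from the Ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```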
Then we create a new Ingress object with the routing rules for the backend service, so that the front-end service can reach the backend.
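A sketch of such an Ingress; the host, service name, and Secret name are assumed examples:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastclass-api-ingress   # assumed name
spec:
  ingressClassName: nginx       # assuming an nginx ingress controller
  tls:
    - hosts:
        - www.yourdomain.com    # placeholder for our custom domain
      secretName: fastclass-tls # the TLS Secret imported earlier (assumed name)
  rules:
    - host: www.yourdomain.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: fastclass-api   # backend Service name (assumed)
                port:
                  number: 8000
```

With this rule in place, requests under /api are forwarded to the backend service while everything else continues to hit the front-end.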
With that, the site can be accessed normally. We also need to deploy the Celery service that handles asynchronous tasks on the backend. Deploying it is similar to deploying the backend service, using the same image and environment variables, except that instead of the web service we start a Celery Worker, so the startup command must be changed to something like `celery -A fastclass worker -l INFO`.
There is also a scheduled-task service that runs certain tasks on a schedule; here we use Celery Beat. Again it uses the same image and environment variables, but with a modified startup command, shown below:
celery -A fastclass beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
In the end we deployed four services: the front-end service, the backend service, the Celery Worker, and the Celery Beat service. With that, our application is fully deployed.
Later, as we add other functional services, we can deploy them as microservices in the same way, which is very convenient.
In KubePanel you can also see the resource usage of the entire cluster; it is essentially a Kubernetes cluster dashboard.
If there are features of Kubernetes that are not supported by the console page, you can use the terminal to do this on your own, just as you would with a Kubernetes cluster.
Additionally, we can save the cluster's kubeconfig file locally so that we can manage the cluster with kubectl commands from our own machine.
Detailed billing information for our purchases is also available in the Expense Center.
We're only using a portion of Sealos' functionality here, and there are many other features such as object storage, cloud development, and other capabilities that you can explore on your own.
Isn't this easier and more convenient than buying an ECS server and fiddling with it yourself? The cost is much lower too. If you are a startup team that needs to iterate in small steps and validate the product quickly, Sealos is definitely one of the best options for hosting your application.