We are the Kangaroo Cloud Digital Stack UED team, committed to building an excellent one-stop data middle-platform product. We keep up the spirit of craftsmanship, keep exploring the front-end path, and accumulate and share experience of value to the community.
Author: Liu Yi
This article is part of a series:
Building an Automated Web Page Performance Inspection System -- Design
Building an Automated Web Page Performance Inspection System -- Implementation
As a front-end developer who wants to build a full-stack project, the first thing that comes to mind is probably node + vue/react. In the beginning you might create multiple project directories, say web and server, and perhaps an admin project for the management console, which leaves you with three codebases. To keep the remote repositories manageable, you then create a new group to hold them all; this approach is generally called MultiRepo.
This is obviously not concise and makes development and deployment harder. For this kind of multi-module project we can introduce the concept of a Monorepo. Below are some attempts at optimizing the setup, using yice-performance (meaning "easy to measure") as the example. The local device is an arm64 v8 platform with an Apple M1 chip.
I. Hosting static pages with node
You can hand the web build output to node to host: put the web code as a folder inside the server's directory, and the backend interfaces can then usually be accessed directly under the root path, as in yice-performance v1.0.
The corresponding nginx configuration is typically:
server {
listen 80;
server_name ;
location / {
proxy_pass http://localhost:4000/;
}
}
Common node frameworks support hosting static file directories:
// express
app.use(express.static(path.join(__dirname, 'web/dist')));
// NestJS
import { ServeStaticModule } from '@nestjs/serve-static';
ServeStaticModule.forRoot({
  serveRoot: '/',
  rootPath: join(__dirname, '.', 'web/dist'),
}),
// egg (in config/config.default.js)
{
  static: {
    dir: path.join(appInfo.baseDir, 'web/dist'),
  },
}
The code is much the same in each framework. From the nginx configuration and project structure you can also see that this is still essentially a node project; the nginx configuration of a pure front-end project generally looks like this:
server {
listen 80;
server_name ;
root /opt/dtstack/yice-performance/web/dist/;
location /api {
proxy_pass http://localhost:4000/;
}
location / {
try_files $uri $uri/ /index.html;
}
}
II. Turborepo
Turborepo is a high-performance build system for JavaScript and TypeScript codebases.
With Turborepo we can run and build code in parallel. When using a traditional yarn workspace to manage the code, we typically execute the following commands:
# server
yarn
yarn dev
# web
cd web
yarn
yarn dev
In this case, local developers not only need to open two terminals at the same time, but also have to pay attention to the paths of the two terminals, as is the case with the commands lint, build, and test.
To do all of the above faster, you can use `turbo run lint test build`.
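That single command works as long as the corresponding tasks are declared in turbo.json. A rough sketch of what that might look like (the task definitions here are illustrative; the project's actual turbo.json appears later in this article):

{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "lint": {},
    "test": { "dependsOn": ["^build"] },
    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] }
  }
}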
Newer projects are more likely to use Turborepo from the start; they can simply be created with `create-turbo`, see the official documentation. Historical projects that want to adopt Turborepo need to pay attention to the project structure:
yice-performance
├─ package.json
├─ pnpm-workspace.yaml
├─ pnpm-lock.yaml
├─ ...
├─ apps
|  ├─ server
|  └─ web
Consolidate the code of each historical project into its own folder and move it under apps. Note that relative paths and related code need to be adjusted, such as `@/*` path aliases and `import` paths for dependencies, and shared dependency packages should be hoisted to the root directory.
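As an illustration, after the move the `@/*` alias in each app usually points at that app's own src directory. A minimal sketch of a per-app tsconfig (the paths below are assumptions; adjust them to the real structure, and the bundler side may need a matching alias as well):

// apps/web/tsconfig.json (sketch)
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"]
    }
  }
}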
Add a turbo.json file in the root directory, in this case with the dev and build tasks:
{
"$schema": "/",
"tasks": {
"build": {
"dependsOn": ["^build"],
"outputs": [".apps/server/dist/**", "!.apps/server/cache/**"]
},
"dev": {
"persistent": true,
"cache": false
}
}
}
Then add the two scripts to the package.json of each project under apps:
{
"scripts": {
"dev": "NODE_ENV=development nest start --watch",
"build": "NODE_ENV=production nest build"
}
}
{
"scripts": {
"dev": "NODE_ENV=development vite --port 7001",
"build": "tsc && NODE_ENV=production vite build"
}
}
With this setup, a single `pnpm dev` command starts multiple services at the same time, and `pnpm build` quickly builds multiple projects.
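For reference, this works because the root package.json delegates to turbo; a minimal sketch (script names assumed from the commands above):

{
  "scripts": {
    "dev": "turbo run dev",
    "build": "turbo run build",
    "lint": "turbo run lint"
  },
  "devDependencies": {
    "turbo": "^2.0.0"
  }
}

With pnpm, a pnpm-workspace.yaml that lists apps/* is also needed so that the packages under apps are linked into the workspace.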
III. docker
yice-performance depends on Puppeteer, which has demanding requirements for the device environment (see Puppeteer Troubleshooting). For example, the weekly data report feature added in the new version of yice uses echarts on the node side, which ultimately relies on node-canvas, and its requirements for the device environment are also demanding.
At the same time, the deployment commands are written in shell scripts, which need to account for differences between environments, such as Windows. Docker's role here is to smooth out the environment differences between devices, reducing the pain of manually installing missing dependencies and the dependency installation failures caused by differences between amd64, arm64, and other environments. We can build docker images for different platforms (hereinafter referred to as "docker packages"); `linux/amd64`, commonly referred to as the `x86_64` architecture, is used as the example here.
Dockerfile
Prepare a Dockerfile locally, then run the `docker build` command to build the image. Before building, note that Dockerfile builds work with the concept of layers, which has a big impact on build time.
A Docker image is made up of multiple read-only layers stacked on top of each other, with each layer built on top of the previous one. Each instruction in the Dockerfile creates a new layer that modifies the image. The `docker build` command uses a cache: when the earlier layers have not changed, building the image again is faster. Since each layer is built on the previous one, we should put the operations that are least likely to change first; later changes then only rebuild the changed layers rather than the whole image, which greatly speeds up image builds.
For example, if the nodejs installation in the Dockerfile below were placed after `COPY . .`, nodejs would have to be installed on every build; by placing it earlier, we make use of the cache and greatly reduce build time.
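To make the layer-ordering point concrete, here is a minimal illustrative sketch (not the project's Dockerfile, which follows right after):

# Slow ordering: copying the code first means any code change invalidates the
# cache, so nodejs is reinstalled on every build.
# FROM ubuntu:22.04
# COPY . .
# RUN apt-get update -y && apt-get install -y nodejs

# Faster ordering: the rarely-changing install step comes first and stays cached;
# a code change only rebuilds the COPY layer and everything after it.
FROM ubuntu:22.04
RUN apt-get update -y && apt-get install -y nodejs
COPY . .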
FROM ubuntu:22.04
# Setting the time zone
ENV TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
&& apt-get update -y && apt-get install -y tzdata
# System dependencies required by puppeteer and node-canvas
# https://github.com/Automattic/node-canvas?tab=readme-ov-file#compiling
# https://github.com/puppeteer/puppeteer/blob/puppeteer-v19.6.3/docs/troubleshooting.md#chrome-headless-doesnt-launch-on-unix
RUN apt-get update -y \
&& apt-get install -y build-essential libcairo2-dev libpango1.0-dev libnss3 libatk1.0-0 \
&& apt-get install -y ca-certificates fonts-liberation libasound2 libatk-bridge2.0-0 \
&& apt-get install -y libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 \
&& apt-get install -y libgbm1 libgcc1 libglib2.0-0 libgtk-3-0 libnspr4 libpangocairo-1.0-0 \
&& apt-get install -y libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 \
&& apt-get install -y libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 \
&& apt-get install -y libxss1 libxtst6
# Handle dependency issues for chromium and related packages
# https://github.com/puppeteer/puppeteer/blob/puppeteer-v19.6.3/docker/Dockerfile
RUN apt-get update -y \
&& apt-get install -y wget gnupg \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | gpg --dearmor -o /usr/share/keyrings/googlechrome-linux-keyring.gpg \
&& sh -c 'echo "deb [arch=amd64 signed-by=/usr/share/keyrings/googlechrome-linux-keyring.gpg] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update -y \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-khmeros fonts-kacst fonts-freefont-ttf libxss1 --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get remove -y wget gnupg
# The deb [arch=amd64 ...] entry may end up duplicated between /etc/apt/sources.list and /etc/apt/sources.list.d/google.list, so remove it and try again
RUN rm -rf /etc/apt/sources.list.d/google.list \
    && apt-get update -y \
    && apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-khmeros fonts-kacst fonts-freefont-ttf libxss1 --no-install-recommends
# Install nodejs (the npm registry below is an assumed mirror; adjust as needed)
RUN apt-get update -y && apt-get install -y curl \
    && curl -fsSL https://deb.nodesource.com/setup_18.x | bash - \
    && apt-get remove -y curl \
    && apt-get install -y nodejs \
    && npm config set registry https://registry.npmmirror.com \
    && npm install pnpm -g
# Setting up the working directory
WORKDIR /yice-performance
# Copy the dependency manifests and install dependencies
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/server/package.json ./apps/server/
COPY apps/web/package.json ./apps/web/
RUN pnpm install
# Copying project files
COPY apps ./apps
COPY .env ./
# minimize node_modules diskspace
RUN pnpm build \
&& find . -name "node_modules" -type d -prune -exec rm -rf '{}' + \
&& pnpm install --production
# expose a port
EXPOSE 4000
# Defining Environment Variables
ENV NODE_ENV=production
# The chromium path needs to be specified in the Dockerfile
ENV PUPPETEER_EXECUTABLE_PATH='google-chrome-stable'
VOLUME [ "/yice-performance/apps/server/yice-report" ]
# Launching the application
CMD ["node", "apps/server/dist/"]
The Dockerfile for yice-mysql is as follows:
ARG BASE_IMAGE=mysql:5.7
FROM ${BASE_IMAGE}
# .sql files under /docker-entrypoint-initdb.d/ are executed automatically when the container starts
COPY ./mysql/*.sql /docker-entrypoint-initdb.d/
# Additional mysql configuration
COPY ./mysql/my_custom.cnf /etc/mysql/conf.d/
# Set the password for the MySQL root user
ENV MYSQL_ROOT_PASSWORD=123456
ENV MYSQL_DATABASE=yice-performance
# Set the time zone
RUN cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Expose ports
EXPOSE 3306
Build the image locally from the Dockerfile; when it finishes, you can see the newly built image in Docker Desktop. We create a new shell script to manage these commands in one place and add a `build:docker` command to package.json:
#!/bin/sh
cd docker
# amd64
# Dockerfile names below are illustrative; use the actual filenames under docker/
docker buildx build --platform linux/amd64 -f yice-mysql.Dockerfile -t liuxy0551/yice-mysql .
docker buildx build --platform linux/amd64 -f yice-server.Dockerfile -t liuxy0551/yice-server ../
Now `pnpm build:docker` packages the images.
Building images for multiple platforms
Since we currently mostly use Mac M-series chips, which are arm64 v8, while our packaged images are usually used on x86 machines, such as servers running CentOS or Ubuntu, we also need to be compatible with the x86 platform.
Use the `docker inspect` command to view an image's architecture:
docker pull alpine
docker inspect alpine | grep Architecture
Modify the Dockerfile written above so that it accepts parameters passed as `docker build` build arguments; this is useful for specifying different base images for different platforms. Many commonly used base images already support multiple platforms, so you only need to add `--platform linux/amd64,linux/arm64` and docker buildx will take care of everything automatically. yice-mysql supports arm64 v8 as well; the rest you can research on your own.
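For reference, a multi-platform build command then looks roughly like this. It is only a sketch: it assumes a buildx builder with multi-platform support has been created (for example with `docker buildx create --use`), that the base image exists for both platforms, and that you are logged in to a registry, since multi-platform images have to be pushed rather than loaded into the local image store.

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t liuxy0551/yice-server:latest \
  --push .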
Publishing the images
The Alibaba Cloud container image service is used here.
docker login --username=your_username -p your_password
docker tag liuxy0551/yice-mysql /liuxy0551/yice-mysql:latest
docker tag liuxy0551/yice-server /liuxy0551/yice-server:latest
docker push /liuxy0551/yice-mysql:latest
docker push /liuxy0551/yice-server:latest
docker run
To make sure that yice-server can access yice-mysql, both containers need to be on the same network.
docker network create yice-network
docker run -p 3306:3306 -d --name yice-mysql --network=yice-network -v /opt/dtstack/yice-performance/yice-mysql/conf:/etc/mysql/ -v /opt/dtstack/yice-performance/yice-mysql/log:/var/log/mysql -v /opt/dtstack/yice-performance/yice-mysql/data:/var/lib/mysql /liuxy0551/yice-mysql:latest
docker run -p 4000:4000 -d --name yice-server --network=yice-network -v /opt/dtstack/yice-performance/yice-report:/yice-performance/apps/server/yice-report /liuxy0551/yice-server:latest
- `-p` means port mapping, in the form `-p host_port:container_port`. The ports are exposed here so that the data can also be viewed externally with a GUI tool.
- `-d` means run in the background and print the container id.
- `--name` is the name assigned to the container.
- `-v /opt/dtstack/yice-performance/yice-mysql:/etc/mysql/` and the similar mount paths mean that the configuration, data, and logs inside the container are mounted to `/opt/dtstack/yice-performance/yice-mysql` on the host.
- `-v /opt/dtstack/yice-performance/yice-report:/yice-performance/apps/server/yice-report` means the inspection reports inside the container are mounted to the host.
- The purpose of mounting is to avoid losing data when the container is deleted and to keep write operations out of the container's storage layer as much as possible.
Run the `docker run` commands to create and start the containers, then visit http://localhost:4000 to see the page.
docker-compose
docker-compose is an official Docker tool for managing multi-container Docker applications; with docker-compose, multiple containers can be orchestrated and run together.
Add a docker-compose.yml file that defines the services and containers the application needs, including images, environment variables, port mappings, mounted directories, and so on:
version: '3'
services:
mysql-service:
container_name: yice-mysql
image: /liuxy0551/yice-mysql:latest
ports:
- '3306:3306'
restart: always
networks:
- yice-network
server-service:
container_name: yice-server
image: /liuxy0551/yice-server:latest
ports:
- '4000:4000'
restart: always
depends_on:
- mysql-service
networks:
- yice-network
networks:
yice-network:
driver: bridge
docker-compose -f docker/docker-compose.yml -p yice-performance up -d
| Command | Description |
| --- | --- |
| docker-compose up | Start the application; `-d` runs it in the background |
| docker-compose down | Stop and remove containers, volumes, images, etc. |
| docker-compose ps | List running containers |
| docker-compose logs | View logs |
| docker-compose stop | Stop services |
| docker-compose start | Start services |
| docker-compose restart | Restart services |
IV. Frequently asked questions
yice-server fails to start
The docker version may be too low; it is recommended to upgrade to docker v24 or above, and to back up before upgrading:
yum install docker-ce docker-ce-cli docker-buildx-plugin docker-compose-plugin
node[1]: ../src/node_platform.cc:61:std::unique_ptr<long unsigned int> node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start(): Assertion `(0) == (uv_thread_create((), start_thread, this))' failed.
1: 0xb090e0 node::Abort() [node]
2: 0xb0915e [node]
3: 0xb7512e [node]
4: 0xb751f6 node::NodePlatform::NodePlatform(int, v8::TracingController*) [node]
5: 0xacbf74 node::InitializeOncePerProcess(int, char**, node::InitializationSettingsFlags, node::ProcessFlags::Flags) [node]
6: 0xaccb59 node::Start(int, char**) [node]
7: 0x7f2ffac64d90 [/lib/x86_64-linux-gnu/libc.so.6]
8: 0x7f2ffac64e40 __libc_start_main [/lib/x86_64-linux-gnu/libc.so.6]
9: 0xa408ec [node]
gcc version too low
Ubuntu is recommended for host deployment.
When deploying in host mode on CentOS 7, starting the service reports `Error: /lib64/libstdc++.so.6: version 'CXXABI_1.3.9' not found`. This is because the gcc version shipped with CentOS 7 is too low and needs to be upgraded beyond gcc 4.8.5. Run the following command to check whether CXXABI_1.3.9 is present:
strings /lib64/libstdc++.so.6 | grep CXXABI
RELATED:
https://github.com/Automattic/node-canvas/issues/1796
https://gist.github.com/nchaigne/ad06bc867f911a3c0d32939f1e930a11
https://ftp.gnu.org/gnu/gcc/
cd /etc/gcc
wget https://ftp.gnu.org/gnu/gcc/gcc-9.5.0/gcc-9.5.0.tar.gz
tar xzvf gcc-9.5.0.tar.gz
mkdir build-9.5.0
cd gcc-9.5.0
./contrib/download_prerequisites
cd ../build-9.5.0
../gcc-9.5.0/configure --disable-multilib --enable-languages=c,c++
make -j $(nproc)
make install
Finally
Welcome to follow the [Kangaroo Cloud Digital Stack UED team]~
The Kangaroo Cloud Digital Stack UED team continuously shares its technical achievements with developers and has taken part in a number of open source projects; stars are welcome:
- Big Data Distributed Task Scheduler - Taier
- Lightweight Web IDE UI framework - Molecule
- SQL Parser Project for Big Data - dt-sql-parser
- Kangaroo Cloud Digital Stack front-end team code review engineering practices document - code-review-practices
- A faster, more flexible and easier to use module packager - ko
- A component testing library for antd - ant-design-testing