
Some features of nginx

2024-07-24 22:41:10

I. Layer 4 (TCP/UDP) proxying

Since nginx does not support layer-4 proxying by default, you need to compile in the stream module with --with-stream.

./configure --with-stream
# View which modules the current nginx was built with
root@proxy[05:52:09]:/usr/local/nginx
$ sbin/nginx -V
nginx version: nginx/1.24.0
built by gcc 8.5.0 20210514 (Red Hat 8.5.0-20) (GCC)
configure arguments: --with-stream

【1】、Creating a cluster

This cluster load-balances SSH logins, so it can no longer be configured inside the http block; we need to create a new stream configuration:

stream {
    upstream ssh_server {
        server 192.168.121.171:22;
        server 192.168.121.172:22;
    }
    server {
        listen 12345;
        proxy_pass ssh_server;
    }
}

Since we told nginx to listen on port 12345, after starting nginx again it will listen on two ports: 80 and 12345.

root@proxy[06:04:15]:/usr/local/nginx
$ sbin/nginx 
root@proxy[06:04:28]:/usr/local/nginx
$ ss -tunlp | grep nginx
tcp   LISTEN 0      511          0.0.0.0:80         0.0.0.0:*    users:(("nginx",pid=5549,fd=7),("nginx",pid=5548,fd=7))
tcp   LISTEN 0      511          0.0.0.0:12345      0.0.0.0:*    users:(("nginx",pid=5549,fd=6),("nginx",pid=5548,fd=6))
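When changing the configuration of a running instance, it is worth validating before reloading; a minimal sketch, assuming the /usr/local/nginx layout used above:

```shell
# Test the configuration file for syntax errors without touching the
# running worker processes, then apply the change gracefully.
sbin/nginx -t          # prints "syntax is ok" / "test is successful" on success
sbin/nginx -s reload   # re-read the config without dropping existing connections
```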

【2】、Perform a test

The proxy's IP is 192.168.121.170 and its HTTP port is 80. We set up the ssh_server cluster and made nginx listen on port 12345. When a client now runs ssh 192.168.121.170 -p 12345, the proxy automatically distributes the connection to one of the servers behind it.

Prepare another host for SSH access. When connecting, you must use port 12345, not port 22 on the nginx host. Log in, exit, then log in again via SSH to see the round-robin effect:

ssh 192.168.121.170 -p 12345

⚠️ The following problem may then occur

After the first successful SSH login and logout, the next SSH login is rejected. At this point we need to delete the ~/.ssh/known_hosts file and log in again.

From then on, delete this file before every SSH access.

Reasons for this situation:

SSH has a protection mechanism: the first time we log in to a machine, the ~/.ssh/known_hosts file records a mapping of IP to host key. The next time we SSH to that IP, the client verifies that the host key still matches the recorded one.

Because we set up a cluster with round-robin balancing, the IP we access is always the same (the nginx cluster IP), but the host we actually land on differs between logins. That is why the first login succeeds, yet after logging out and logging in again an error is reported. Deleting ~/.ssh/known_hosts fixes it; you need to delete the file after each login to keep observing the round-robin effect.

If the machines you build the cluster on are cloned from a single machine, this situation will not occur: since they are clones, all machines share the same host key, so the verification never fails.
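Instead of deleting the whole known_hosts file each time, a gentler workaround (a sketch, assuming a stock OpenSSH client) is to drop only the offending entry, or to disable host-key checking for this throwaway test:

```shell
# Remove only the cached key for the proxy. Entries for non-default ports
# are stored as "[host]:port" in ~/.ssh/known_hosts.
ssh-keygen -R "[192.168.121.170]:12345"

# Or skip host-key verification entirely for this test connection:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -p 12345 192.168.121.170
```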


II. Nginx concurrency test

We conduct a stress test with nginx as the test subject and web01 simulating a large number of users accessing it.

Stress-testing tool: ab, from the httpd-tools package.

By default, the maximum number of files a Linux process can open is 1024.

You can change this limit permanently by modifying the configuration file under /etc/security/ (takes effect after a reboot).

# View the maximum number of files open on a Linux system
root@proxy[18:26:32]:/usr/local/nginx
$ ulimit -n
1024
# Modify the maximum number of open files on a Linux system (temporary modification)
root@proxy[18:30:22]:/usr/local/nginx
$ ulimit -n 100000
root@proxy[18:31:20]:/usr/local/nginx
$ ulimit -n
100000

# Permanently change the limit by modifying the file under /etc/security/.
# Its commented example entries look like this:
#* soft core 0
#* hard rss 10000
# To change the maximum number of open files, add:
* soft nofile 100000
* hard nofile 100000
# *: the rule applies to every user
# soft: the currently enforced limit (a user may raise it, but only up to the hard limit)
# hard: the ceiling for the soft limit (only root can raise it)
# nofile: maximum number of open files
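The soft/hard distinction can be seen directly in a shell session; a small sketch (the 1024 here is an arbitrary example value):

```shell
# The soft limit is what `ulimit -n` reports and enforces; any process may
# move it up or down, but never above the hard limit. Only root can raise
# the hard limit.
hard=$(ulimit -H -n)
ulimit -S -n 1024              # lower the soft limit for this shell
echo "soft=$(ulimit -S -n) hard=$hard"
ulimit -S -n "$hard"           # raise it back up to the hard ceiling
echo "soft=$(ulimit -S -n)"
```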

# Run the stress test from web01; -n: total number of requests, -c: number of concurrent requests
ab -n 100 -c 100 http://192.168.121.170/
# If the test completes with output like the following, it succeeded
Percentage of the requests served within a certain time (ms)
  50% 29
  66% 29
  75% 30
  80% 30
  90% 32
  95% 32
  98% 33
  99% 33
 100% 33 (longest request)

# Testing right away shows that nginx's concurrency is not high. This is mainly because nginx has not yet been tuned; we need to optimize nginx and then run the stress test again.

worker_processes 2;

# worker_connections: the maximum number of connections per worker process.
# Like worker_processes, it can in theory be very large, but on a
# resource-limited VM it should not be set too high.
events {
    worker_connections 50000;
}

# After modifying the configuration file and raising the system's open-file limit, restart nginx and run the stress test.
ab -n 8000 -c 8000 http://192.168.121.170/
# We can see that concurrency reaches 8000-10000; on a real server in a production environment it would be even higher.
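The result lines up with a rough capacity estimate. As a sketch (the divide-by-two for proxy mode is a common rule of thumb, since each client connection also consumes an upstream connection; it is not an exact nginx guarantee):

```shell
# Back-of-the-envelope concurrency estimate from the tuning values above.
worker_processes=2
worker_connections=50000
echo "max clients (static) ~= $(( worker_processes * worker_connections ))"
echo "max clients (proxy)  ~= $(( worker_processes * worker_connections / 2 ))"
```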

III. Making nginx support extra-long URLs

Sometimes the page I want to access sits very deep in the path hierarchy and may carry many parameters, for example:

/aa/vv/cc/xxxxx/wwww/rrrr/ggg/?a=123&b=567....

By default, nginx reads the request URL into a 1 KB buffer (falling back to 8 KB "large" buffers for longer requests). Faced with this kind of extra-long URL, we need to tune nginx further so that it accepts extra-long addresses.

We can write a script to test this

#!/bin/bash
URL=http://192.168.121.170/?
for i in {1..5000}
do
        URL=${URL}v$i=$i
done
curl $URL

# Executing this script produces status code 414. The page actually exists, but nginx cannot parse such a long URL, so the page cannot be viewed and a 414 error is returned.
root@proxy[19:37:40]:~
$ bash
<html>
<head><title>414 Request-URI Too Large</title></head>
<body>
<center><h1>414 Request-URI Too Large</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>
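To see how far over the limit the script goes, you can measure the URL it builds (same loop as above):

```shell
# Build the same URL as the test script and print its length, to compare
# against nginx's default header buffers (1k initial, 8k large).
URL="http://192.168.121.170/?"
for i in $(seq 1 5000); do
    URL="${URL}v$i=$i"
done
echo "URL length: ${#URL} bytes"   # ~47 KB, far beyond the 8k default
```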

# Modify the nginx configuration file so that it can parse very long URLs: add the following inside the http block. This limit cannot be raised without bound; it is constrained by the server's memory.
client_header_buffer_size 200k;
large_client_header_buffers 4 200k;


# After restarting nginx and executing the script, the error 414 will no longer be reported.
root@proxy[19:46:04]:/usr/local/nginx
$ bash ~/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="/"></a>.<br/>
Commercial support is available at
<a href="/"></a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

IV. Browser-side caching of static data

We can use nginx to control the client's browser. When a user visits our website for the first time, the server transfers the data and the browser keeps a cached copy. On the next visit it looks as though the server is transferring the data again, but in fact the browser's cache is serving it.

By modifying nginx's configuration file, you can set which kinds of resources users' browsers will cache, and for how long.

# When a user requests a resource of type (png|jpg|mp4|html|txt), the browser generates a cache entry and keeps it for 30 days.
location ~* \.(png|jpg|mp4|html|txt)$ {
        expires 30d;
}
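To confirm the caching behavior, one can inspect the response headers; a sketch assuming the demo server at 192.168.121.170 serves an index.html:

```shell
# expires 30d; should add an Expires date ~30 days in the future and
# Cache-Control: max-age=2592000 (30 days in seconds).
curl -sI http://192.168.121.170/index.html | grep -iE '^(expires|cache-control)'
```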