
Nginx-Based Large-Scale Internet Cluster Architecture: A Hands-On Guide


1. Nginx Load Balancing Basic Configuration

First, build a basic Nginx load balancer for distributing traffic to multiple backend servers.

Step 1.1: Install Nginx

On each server that will act as a load balancer, install Nginx. This can be done with the system's package manager; on Ubuntu, for example:

sudo apt update
sudo apt install nginx
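
To confirm the installation and that the service is running:

nginx -v                       # Print the installed Nginx version
sudo systemctl status nginx    # Check that the nginx service is active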

Step 1.2: Configure Nginx Load Balancing

The heart of Nginx is its configuration file, where we can define a pool of backend servers and a load-balancing policy. Here is a simple Nginx load-balancing configuration:

# Define a pool of backend servers called "backend"
upstream backend {
   server backend1.example.com weight=5;  # Placeholder hostnames; the weights control each server's share of traffic
   server backend2.example.com weight=3;
   server backend3.example.com weight=2;

   # Active health checks require Nginx Plus; the open source version needs a third-party module (see Step 3.1).
   # Nginx Plus example (the health_check directive goes inside the proxied location block):
   # health_check interval=10s fails=3 passes=2;
}

# Configure the HTTP server
server {
   listen 80;
   server_name example.com;  # Placeholder domain; replace with your own

   location / {
       proxy_pass http://backend;
       proxy_set_header Host $host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Configuration Details:

  • upstream block: Defines a pool of backend servers and assigns a weight to each; Nginx distributes traffic to the backend servers in proportion to those weights.

  • health checks: Periodically check the availability of backend servers so that requests are no longer sent to a server that is down.

  • proxy_pass: Forwards client requests to the backend server pool defined above.

Step 1.3: Starting and Testing Nginx

After ensuring that the configuration is correct, start or restart the Nginx service:

sudo nginx -t  # Test the configuration file for correctness
sudo systemctl restart nginx  # Restart Nginx

Test: access the load balancer's address and verify that requests are distributed to the backend servers in proportion to their weights; a quick check is sketched below.
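
A rough way to check the distribution from a client machine, assuming each backend adds something that identifies it to its response (for example a custom X-Backend header, which is not part of the configuration above and would have to be set on the backends themselves):

# Hypothetical address of the Nginx load balancer; adjust to your environment
LB_URL="http://your-load-balancer/"

# Send 100 requests and count how often each backend answered,
# based on the assumed X-Backend response header
for i in $(seq 1 100); do
  curl -s -o /dev/null -D - "$LB_URL" | grep -i '^x-backend:'
done | sort | uniq -c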

2. High availability configuration (Keepalived + Nginx)

Load balancing with Nginx alone still faces a single point of failure. If the front-end Nginx goes down, the entire service will be unavailable. Therefore, we need to implement a highly available Nginx cluster with Keepalived.

Step 2.1: Install Keepalived

Install Keepalived on each Nginx server. Using Ubuntu as an example:

sudo apt install keepalived

Step 2.2: Configure Keepalived

Keepalived implements failover through virtual IP addresses (VIPs). When the primary server goes down, the VIP automatically switches to the standby server, ensuring high service availability.

On the primary Nginx server, edit the Keepalived configuration file /etc/keepalived/keepalived.conf:

vrrp_instance VI_1 {
  state MASTER  # This node is the primary
  interface eth0  # Network interface the VIP is bound to
  virtual_router_id 51
  priority 100  # The node with the highest priority becomes MASTER
  advert_int 1  # VRRP advertisement interval (seconds)
  authentication {
      auth_type PASS
      auth_pass 123456  # Shared password (example value)
  }
  virtual_ipaddress {
       192.168.0.100  # Virtual IP address (VIP)
  }
  track_script {
      chk_nginx  # Script that monitors Nginx status (defined in Step 2.3)
  }
}

On the standby Nginx server, set state to BACKUP and set priority to a lower value, e.g. 90.

Step 2.3: Monitoring Nginx Status

Keepalived decides whether to move the VIP by monitoring Nginx's operational status. Create a monitoring script at /etc/keepalived/check_nginx.sh:

#!/bin/bash
if ! pidof nginx > /dev/null
then
  systemctl stop keepalived  # If Nginx stops, shut down Keepalived to trigger VIP switching
fi

Make the script executable:

sudo chmod +x /etc/keepalived/check_nginx.sh

Register the script in the Keepalived configuration file with a vrrp_script block (this is what the track_script reference above points to):

vrrp_script chk_nginx {
  script "/etc/keepalived/check_nginx.sh"
  interval 2
}
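
To sanity-check the mechanism on the primary node, you can stop Nginx by hand and confirm that Keepalived shuts itself down, which is what releases the VIP:

sudo systemctl stop nginx               # Simulate an Nginx failure
sleep 5                                 # Give the check script time to fire (it runs every 2 seconds)
systemctl is-active keepalived          # Should no longer report "active"
sudo systemctl start nginx keepalived   # Restore the node afterwards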

Step 2.4: Starting Keepalived

After completing the configuration, start or restart the Keepalived service:

sudo systemctl restart keepalived

Test: Shut down Nginx on the primary server; the VIP should automatically switch to the standby server so that service continues uninterrupted. A way to observe the switch is sketched below.
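
A minimal way to observe the switchover, using the VIP and interface from the configuration above (192.168.0.100 on eth0):

# On each Keepalived node: the node that currently holds the VIP will show it on eth0
ip addr show eth0 | grep 192.168.0.100

# From a client: keep requesting the service through the VIP while you stop Nginx on the master
while true; do curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.100/; sleep 1; done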

3. Nginx Health Checks and Dynamic Scaling

Nginx can use health checks to ensure that only healthy servers participate in load balancing. In addition, dynamic scaling is key to handling bursty traffic. The relevant configurations and practical examples follow.

Step 3.1: Configure Health Check (open source version)

The open source version of Nginx does not ship with an active health check module; one can be added with a third-party module (for example nginx_upstream_check_module, whose check directive is used below). Assuming such a module has been compiled in, the configuration is as follows:

upstream backend {
   server backend1.example.com;  # Placeholder hostnames
   server backend2.example.com;
   server backend3.example.com;

   # Health check via the third-party check directive:
   # probe every 5s; mark a server up after 2 consecutive successes, down after 5 failures; 2s timeout
   check interval=5000 rise=2 fall=5 timeout=2000;
}
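
When debugging, it also helps to probe the backends directly, bypassing Nginx. A simple sketch, using the placeholder hostnames above and assuming each backend answers plain HTTP on port 80:

# Print the HTTP status code returned by each backend
for host in backend1.example.com backend2.example.com backend3.example.com; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 "http://$host/")
  echo "$host -> $code"
done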

Step 3.2: Dynamically Scale the Backend Server

In conjunction with containerization technologies such as Docker or Kubernetes, back-end servers can be automatically scaled based on traffic. For example, the number of replicas of an application service can be automatically scaled in a Kubernetes cluster using Horizontal Pod Autoscaler (HPA).

The following is an example of configuring auto-scaling in Kubernetes:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70  # Scale out when average CPU utilization exceeds 70%
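
Assuming the manifest is saved as backend-hpa.yaml (the filename is arbitrary), it can be applied and observed with kubectl:

kubectl apply -f backend-hpa.yaml   # Create the autoscaler
kubectl get hpa backend-hpa -w      # Watch current/target CPU utilization and replica count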

In this way, back-end services can be dynamically scaled according to the load, and Nginx can automatically identify new back-end servers by configuring the service discovery mechanism.

4. Nginx SSL/TLS Configuration

In a production environment, enabling HTTPS is essential. The following is the configuration for enabling SSL/TLS:

Step 4.1: Generate or Obtain an SSL Certificate

Generate a free SSL certificate with Let's Encrypt:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com  # Replace example.com with your domain
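
Let's Encrypt certificates are valid for 90 days; the certbot packages normally set up automatic renewal (via a systemd timer or cron job), and the renewal path can be tested without touching the live certificate:

sudo certbot renew --dry-run  # Simulate renewal against the staging environment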

Step 4.2: Configure Nginx to Use SSL

server {
   listen 443 ssl;
   server_name example.com;  # Placeholder; replace with your domain

   # Certificate paths as laid out by certbot under /etc/letsencrypt/live/<your-domain>/
   ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
   ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

   location / {
       proxy_pass http://backend;
       proxy_set_header Host $host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
   }
}

# Redirect HTTP requests to HTTPS
server {
   listen 80;
   server_name example.com;  # Placeholder; replace with your domain
   return 301 https://$host$request_uri;
}

Summary

With Nginx load balancing, Keepalived for high availability, health checks, and dynamically scaled backend servers, you can build an efficient, scalable, and highly available Internet cluster architecture.