
How many socket connections can a server handle at the same time?

Published: 2024-12-19 16:23:16

How many sockets can a server process connect to at the same time?

To understand how many simultaneous connections a server-side process can support, we first need to clarify how a socket connection is represented. A connection is identified by a four-tuple: [LocalIP:LocalPort:RemoteIP:RemotePort]. For a server-side process, LocalIP and LocalPort are fixed, while RemoteIP and RemotePort vary. Think about it: how many possible values does RemoteIP have? How many does RemotePort have? How many connections can the two combined theoretically support?

Theoretically, the combination is possible:

  • (RemoteIP) 2^32 * (RemotePort) 2^16 = 2^48

This means that a server process can theoretically support 2^48 connections. However, in practice, the number of connections is usually limited by other system resources.
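The arithmetic can be checked directly with a trivial sketch:

```python
# Remote half of the four-tuple, as seen by one listening server socket.
remote_ips = 2 ** 32    # IPv4 address space
remote_ports = 2 ** 16  # TCP port range

combinations = remote_ips * remote_ports
print(combinations == 2 ** 48)  # True
```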

Is it limited by the number of ports?

The first thing to make clear is that the server occupies only one port: the one it listens on. Establishing connections with clients does not consume additional server ports. The port limit applies to the client side: each client occupies a local port only when it establishes a connection.

The number of connections on the server side is not affected by the number of ports.
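This is easy to verify with a minimal loopback demo (standard library only, OS-assigned port): every accepted connection shares the listener's LocalIP:LocalPort and differs only in the remote half of the four-tuple.

```python
import socket

# One listener, two clients, all on the loopback interface.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # let the OS pick a free port
listener.listen()
port = listener.getsockname()[1]

clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
accepted = [listener.accept()[0] for _ in range(2)]

# Both server-side sockets use the same LocalIP:LocalPort ...
local_ports = {conn.getsockname()[1] for conn in accepted}
# ... and are distinguished only by the client's RemoteIP:RemotePort.
peers = {conn.getpeername() for conn in accepted}

for s in clients + accepted + [listener]:
    s.close()

print(local_ports == {port}, len(peers))
```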

It's not limited by ports, so what is it limited by?

The number of connections a server supports is mainly limited by file descriptors. Each socket connection occupies one file descriptor, and on Linux the default limit for a user process is usually 1024. If the number of connections exceeds this value, the application reports an error such as "Too many open files".

Fortunately, the file descriptor limit is adjustable. It can be raised to 100,000 or more as needed, which is sufficient for most applications.

The following raises the limit to 1,000,000:

```
# /etc/security/
* soft nofile 1000000
* hard nofile 1000000

# /etc/
-max = 1000000
```
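A process can also inspect (and, up to the hard limit, raise) its own descriptor limit at runtime; a sketch using Python's `resource` module:

```python
import resource

# Current per-process file-descriptor limits: (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```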

 

What is a file descriptor? What resources does it take up?

A file descriptor is an "identifier" the operating system uses to refer to an open file or socket connection. It does not consume many resources by itself; it is simply the operating system's internal bookkeeping handle.
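Concretely, a descriptor is just a small non-negative integer, and sockets and regular files share the same descriptor namespace; a quick sketch:

```python
import os
import socket

sock = socket.socket()   # a socket occupies one descriptor
sock_fd = sock.fileno()

f = open(os.devnull)     # so does an ordinary open file
file_fd = f.fileno()

print(sock_fd, file_fd)  # two small, distinct integers
f.close()
sock.close()
```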

So, what resources does a socket take up on the server?

1) Memory

Each socket connection is allocated receive and transmit buffers in kernel space. Assuming a default size of 128KB per buffer, if the server is managing 100,000 connections, the memory required is:

  • 100,000 × 256KB ≈ 24.41GB of memory
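The figure follows from straightforward arithmetic:

```python
# 128KB receive buffer + 128KB transmit buffer per connection.
per_conn_kb = 128 + 128                       # 256KB per connection
total_gb = 100_000 * per_conn_kb / 1024 ** 2  # KB -> GB
print(round(total_gb, 2))  # 24.41
```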

If the server has less than 24GB of memory but still needs to support 100,000 connections, you can reduce the amount of memory required per connection by adjusting the system's buffer configuration.

For example, the following kernel parameters can be modified to adjust the size of the TCP buffer:

```
# Default configuration: the kernel automatically adjusts the buffer
# size between the minimum and maximum according to network conditions.
net.ipv4.tcp_rmem = 4096 131072 6291456
```

  • 4096: minimum
  • 131072: default value
  • 6291456: maximum

You can set all three values to 4096 (4KB), and do the same for the send-side counterpart net.ipv4.tcp_wmem; 100,000 connections would then occupy only about 100,000 × (4KB + 4KB) = 781.25MB.
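Buffers can also be capped per socket rather than system-wide, via `setsockopt`; a sketch (note that the kernel may adjust the requested size — Linux doubles it to leave room for bookkeeping overhead, per socket(7)):

```python
import socket

# Request a 4KB receive buffer on a single socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

# 4KB receive + 4KB transmit per connection, 100,000 connections:
total_mb = 100_000 * (4 + 4) / 1024
print(granted, total_mb)  # total_mb == 781.25
```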

2) Threads

Server-side processes need threads to process incoming and outgoing data. Most modern servers use the NIO (non-blocking I/O) model, in which a single worker thread manages many socket connections. Through the select or epoll mechanisms, the NIO model can efficiently pick out only the socket connections that need processing.
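A minimal sketch of this model using Python's standard `selectors` module (which chooses epoll on Linux): one thread, one selector, many sockets, and the loop only touches sockets that are ready.

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD/macOS

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # loopback demo, OS-assigned port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")

echoed = None
while echoed is None:
    # Block until at least one registered socket is ready.
    for key, _ in sel.select(timeout=5):
        if key.fileobj is listener:
            conn, _addr = listener.accept()   # readiness = new connection
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)     # readiness = data to read
            key.fileobj.sendall(data)         # echo it back
            echoed = data

print(client.recv(4096))  # b'ping'
client.close()
sel.close()
```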

Usually the number of worker threads is fixed. These worker threads are responsible for reading incoming packets and then handing each complete packet to an application-layer thread pool, which executes the actual business logic.

Connections ≠ Concurrency: How many socket communications can a server process handle?

In practice, the number of connections does not equal concurrency. Concurrency refers to the number of connections exchanging data at the same moment, whereas the number of connections refers to the total number of established connections.

A server-side process can manage a large number of socket connections, but if each connection communicates less frequently (e.g., timed data uploads from IoT devices, occasional commands, etc.), then the system can easily cope even with a high number of connections.

For example, in an IoT platform where devices regularly report data and occasionally send anomaly reports or receive commands, the number of connections and traffic that the server can manage in this application scenario is usually not under much pressure.

However, if there is frequent communication between the client and server side of the application and real-time requirements are high (e.g., real-time data transfer, low-latency processing, etc.), additional factors need to be considered. In this case, does the NIO model cause latency? Can a single service node support such frequent connections? Is it necessary to spread the load and use multiple service nodes to increase concurrency?

Off-topic: why does NIO have latency?

NIO's design has a single worker thread managing the communication of multiple socket connections. Suppose a worker thread handles 10 socket connections and all 10 receive packets at the same time: the processing order depends on arrival order, so the last packet must wait for the first nine to be parsed and dispatched, which introduces some delay.

However, in most real-world business scenarios it is not common for multiple sockets to receive data at exactly the same time. Even when latency does occur, it usually has no significant impact on the business and stays within acceptable limits.

Summary

The number of sockets a server process can handle simultaneously is not limited by the number of ports; it is limited by file descriptors, memory, threads, and other resources. By adjusting system configuration and optimizing the architecture, a server can efficiently manage a large number of connections. In high-frequency communication scenarios, however, further architectural work may be needed, such as accounting for NIO-induced latency or spreading load across multiple service nodes to sustain high concurrency.