preamble
Hello everyone, this is Shirasawa, here to analyze the Go open-source high-performance WebSocket library gws.
For the video walkthrough, follow ShirasawaTalk on Bilibili 📺.
Recommendation
- gws: github.com/lxzan/gws | GitHub 🌟 1.2k, a high-performance WebSocket library with bilingual code comments, well suited for developers with some experience who want to study it in depth.
- Two features of gws:
  - High IOPS, Low Latency (high I/O throughput, low latency)
  - Low Memory Usage (low memory footprint)
As you can see from the benchmark chart below, the larger the payload, the greater gws's performance advantage over other WebSocket libraries. How does it achieve this?
gws chatroom architecture diagram
This is the architecture of the official gws chat room demo, drawn here to help you understand what full-duplex communication is.
WebSocket, like HTTP, is an application-layer protocol. After the TCP three-way handshake completes, Go's net/http library provides the Hijack() method, which hijacks the TCP socket (an active session) from HTTP; from then on the TCP connection is managed by the WebSocket layer and is no longer governed by the HTTP protocol.
Once the application layer holds the TCP socket, it decides when data is sent and received; the transport-layer socket is merely the object being orchestrated (simplex or duplex), so the server can naturally take the initiative to push data to the client.
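As a minimal sketch of this hand-off (not gws's actual handshake code, which also validates the upgrade headers and computes the Sec-WebSocket-Accept key), an HTTP handler can hijack the underlying TCP connection roughly like this:

package main

import (
    "net"
    "net/http"
)

// upgrade hijacks the TCP connection behind an HTTP request so the
// application layer can speak the WebSocket protocol over it directly.
func upgrade(w http.ResponseWriter, r *http.Request) (net.Conn, error) {
    hj, ok := w.(http.Hijacker)
    if !ok {
        return nil, http.ErrNotSupported
    }
    conn, bufrw, err := hj.Hijack()
    if err != nil {
        return nil, err
    }
    // A real implementation would write the "101 Switching Protocols"
    // response (including Sec-WebSocket-Accept) through bufrw here.
    _ = bufrw
    return conn, nil
}

func main() {
    http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrade(w, r)
        if err != nil {
            return
        }
        defer conn.Close()
        // From this point on, reads and writes on conn are controlled
        // entirely by the application layer (full duplex).
    })
    _ = http.ListenAndServe(":8080", nil)
}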
buffer pool
Why is it that the larger the payload, the better gws performs compared with other WebSocket libraries?
Reason: all read and write operations in gws go through a buffer pool.
binaryPool = NewBufferPool(128, 256*1024) // buffer pool
Read buffer: every read from the connection is a system call, so read a larger chunk at once and use an offset to track how much has been consumed, reducing the number of reads.
Write buffer: every write to the connection is a system call, so write into the buffer several times and flush once.
Buffer pool: provides pools of buffers in different size classes, reducing how often large buffers are created and destroyed and thus lowering allocation churn and GC pressure.
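To illustrate the buffering idea on its own (plain standard-library code, not gws internals), bufio batches many small reads and writes into few system calls:

package main

import (
    "bufio"
    "net"
)

func handle(conn net.Conn) error {
    defer conn.Close()

    // Reads come out of a 4 KB in-memory buffer; the kernel is only
    // asked for more data when the buffer runs dry.
    reader := bufio.NewReaderSize(conn, 4096)
    // Writes accumulate in memory and only hit the socket on Flush.
    writer := bufio.NewWriterSize(conn, 4096)

    line, err := reader.ReadBytes('\n')
    if err != nil {
        return err
    }
    if _, err := writer.Write(line); err != nil {
        return err
    }
    return writer.Flush() // one system call for all buffered writes
}

func main() {
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        panic(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go handle(conn)
    }
}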
// NewBufferPool creates a memory pool.
// left and right define the pool's size range; both are rounded up to powers of two.
// Below left, Get returns a buffer of at least left bytes; above right, Put does not recycle the buffer.
func NewBufferPool(left, right uint32) *BufferPool {
    var begin, end = int(binaryCeil(left)), int(binaryCeil(right))
    var p = &BufferPool{
        begin:  begin,
        end:    end,
        shards: map[int]*sync.Pool{},
    }
    for i := begin; i <= end; i *= 2 {
        capacity := i
        p.shards[i] = &sync.Pool{
            New: func() any { return bytes.NewBuffer(make([]byte, 0, capacity)) },
        }
    }
    return p
}
A loop runs from begin to end, doubling the capacity each iteration and creating one sync.Pool per size class. sync.Pool is a type in the Go standard library for storing and recycling temporary objects.
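To make the shard lookup concrete, here is a simplified sketch of Get and Put built on the BufferPool struct above (bytes and sync imported as before); it mirrors the idea of rounding the requested size up to a power of two rather than reproducing gws's exact implementation:

// Get returns a pooled buffer with capacity of at least n bytes (simplified sketch).
func (p *BufferPool) Get(n int) *bytes.Buffer {
    size := int(binaryCeil(uint32(n)))
    if size < p.begin {
        size = p.begin
    }
    if pool, ok := p.shards[size]; ok {
        b := pool.Get().(*bytes.Buffer)
        b.Reset()
        return b
    }
    // Requests above the largest shard are allocated directly and never pooled.
    return bytes.NewBuffer(make([]byte, 0, n))
}

// Put returns a buffer to its shard for reuse (simplified sketch).
func (p *BufferPool) Put(b *bytes.Buffer) {
    if b == nil {
        return
    }
    // Only recycle buffers whose capacity matches a shard exactly, so the
    // "at least n bytes" guarantee of Get always holds.
    if pool, ok := p.shards[b.Cap()]; ok {
        pool.Put(b)
    }
}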
Using the pooled buffer with a conn
When reading from or writing to a network connection (conn), the typical steps are:
- Get a buffer from the pool: call the pool's Get method to obtain a buffer.
- Read data: when data needs to be read from the conn, use the buffer as the destination of the read.
- Process data: handle the data that was read as needed.
- Write data: when data needs to be sent, write it into the pooled buffer first, then flush the buffer to the conn.
- Release the buffer: when finished, Put the buffer back into the pool for reuse.
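A minimal sketch of that flow, assuming the BufferPool sketched above, a plain net.Conn, and Go 1.21+ for bytes.Buffer.AvailableBuffer (gws's own reader and writer are more involved):

// echoOnce reads one chunk from conn into a pooled buffer, processes it,
// and writes it back, returning the buffer to the pool afterwards.
func echoOnce(pool *BufferPool, conn net.Conn) error {
    // 1. Get a buffer from the pool (capacity of at least 4 KB).
    buf := pool.Get(4096)
    // 5. Release: put the buffer back for reuse when we are done.
    defer pool.Put(buf)

    // 2. Read: use the buffer's spare capacity as the destination of the read.
    chunk := buf.AvailableBuffer()[:4096]
    n, err := conn.Read(chunk)
    if err != nil {
        return err
    }

    // 3. Process the data as needed; this sketch simply stages it for echoing.
    buf.Write(chunk[:n])

    // 4. Write: flush the buffered bytes to conn with a single call.
    _, err = buf.WriteTo(conn)
    return err
}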
Designing a WebSocket Library
When writing a WebSocket library, there are several key points that affect its performance, especially in high-concurrency scenarios.
For some of these scenarios, short demos (pseudo-code) are given below; they assume a gorilla/websocket-style *websocket.Conn purely for illustration. From them, some general design methods can be extracted:
- Event-driven model: using a non-blocking, event-driven architecture improves performance because it allows the WebSocket library to handle multiple connections in a single thread instead of blocking while waiting on I/O operations.
package main

import (
    "fmt"
    "time"
)

func main() {
    eventChan := make(chan string)
    readyChan := make(chan bool)
    // Simulate a WebSocket goroutine that emits events.
    go func() {
        time.Sleep(2 * time.Second)
        eventChan <- "connected"
        readyChan <- true
    }()
    // Event-processing loop.
    for {
        select {
        case event := <-eventChan:
            fmt.Println("Event received:", event)
        case <-readyChan:
            fmt.Println("WebSocket is ready to use")
            return
        }
    }
}
- Concurrent processing: how the library handles concurrent connections and messages is an important performance factor. Goroutines or worker pools can improve concurrency handling, as in the sketch below.
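One common pattern (a sketch, not gws's internals) is to give each upgraded connection its own goroutine while a semaphore channel caps how many are handled at once:

package main

import (
    "net/http"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

// sem caps the number of concurrently handled connections at 1024.
var sem = make(chan struct{}, 1024)

func wsHandler(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        return
    }
    sem <- struct{}{} // acquire a slot
    go func() {
        defer func() {
            conn.Close()
            <-sem // release the slot
        }()
        for {
            mt, msg, err := conn.ReadMessage()
            if err != nil {
                return
            }
            // Echo each message back; a real handler would do application work here.
            if err := conn.WriteMessage(mt, msg); err != nil {
                return
            }
        }
    }()
}

func main() {
    http.HandleFunc("/ws", wsHandler)
    _ = http.ListenAndServe(":8080", nil)
}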
- Message compression: supporting message compression (e.g. the permessage-deflate extension) can reduce the amount of data transferred, but it also increases CPU usage, so the right balance needs to be found; the sketch below shows the trade-off.
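To see the trade-off with the standard library alone (this is only the underlying DEFLATE step, not the permessage-deflate negotiation; deflatePayload is a hypothetical helper):

package main

import (
    "bytes"
    "compress/flate"
    "fmt"
    "strings"
)

// deflatePayload compresses a message with raw DEFLATE, the algorithm at the
// core of permessage-deflate. Higher levels trade more CPU for smaller output.
func deflatePayload(payload []byte, level int) ([]byte, error) {
    var out bytes.Buffer
    fw, err := flate.NewWriter(&out, level)
    if err != nil {
        return nil, err
    }
    if _, err := fw.Write(payload); err != nil {
        return nil, err
    }
    if err := fw.Close(); err != nil {
        return nil, err
    }
    return out.Bytes(), nil
}

func main() {
    msg := []byte(strings.Repeat(`{"type":"chat","text":"hello"}`, 100))
    compressed, err := deflatePayload(msg, flate.BestSpeed)
    if err != nil {
        panic(err)
    }
    fmt.Printf("original %d bytes, compressed %d bytes\n", len(msg), len(compressed))
}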
- Memory management: optimizing memory usage, for example by reducing allocations and reusing buffers, improves performance and reduces pressure on the garbage collector.
// A reusable read buffer to avoid allocating on every message (pseudo-code).
var buffer = make([]byte, 0, 1024)

func readMessage(conn *websocket.Conn) {
    _, data, err := conn.ReadMessage()
    if err != nil {
        // handle the error
        return
    }
    // Copy into the reusable buffer, then use the data contained in buffer.
    buffer = append(buffer[:0], data...)
}
- Connection pool management: effective connection pool management reduces the overhead of establishing and closing connections, especially in scenarios with long-lived connections and frequent communication.
type WebSocketPool struct {
    pool map[*websocket.Conn]struct{}
}

func (p *WebSocketPool) Add(conn *websocket.Conn) {
    p.pool[conn] = struct{}{}
}

func (p *WebSocketPool) Remove(conn *websocket.Conn) {
    delete(p.pool, conn)
}

func (p *WebSocketPool) Broadcast(message []byte) {
    for conn := range p.pool {
        conn.WriteMessage(websocket.TextMessage, message)
    }
}
- Locks and synchronization mechanisms: in a multithreaded or multi-goroutine environment, sensible locking and synchronization are necessary to avoid race conditions and deadlocks, but excessive lock contention degrades performance.
import "sync"

var pool = &WebSocketPool{
    pool: make(map[*websocket.Conn]struct{}),
}
var mu sync.Mutex

func broadcast(message []byte) {
    mu.Lock()
    defer mu.Unlock()
    for conn := range pool.pool {
        conn.WriteMessage(websocket.TextMessage, message)
    }
}
- I/O Model: Using non-blocking I/O or asynchronous I/O models can improve performance because they allow other tasks to be performed while waiting for network data.
func handleConnection(conn *websocket.Conn) {
    go func() {
        defer conn.Close()
        for {
            _, message, err := conn.ReadMessage()
            if err != nil {
                return // handle the error
            }
            // Handle the incoming message.
            _ = message
        }
    }()
}
- Protocol implementation: an accurate and efficient implementation of the WebSocket protocol, including frame handling, adding and removing masks, and managing control frames, all affect performance, as in gws's genFrame below.
func (c *Conn) genFrame(opcode Opcode, payload internal.Payload, isBroadcast bool) (*bytes.Buffer, error) {
    // Text frames must contain valid UTF-8 when the check is enabled.
    if opcode == OpcodeText && !payload.CheckEncoding(c.config.CheckUtf8Enabled, uint8(opcode)) {
        return nil, internal.NewError(internal.CloseUnsupportedData, ErrTextEncoding)
    }
    var n = payload.Len()
    if n > c.config.WriteMaxPayloadSize {
        return nil, internal.CloseMessageTooLarge
    }
    // Take a pooled buffer big enough for header + payload and reserve header space.
    var buf = binaryPool.Get(n + frameHeaderSize)
    buf.Write(framePadding[0:])
    // Compress data frames above the threshold when permessage-deflate is enabled.
    if c.pd.Enabled && opcode.isDataFrame() && n >= c.pd.Threshold {
        return c.compressData(buf, opcode, payload, isBroadcast)
    }
    var header = frameHeader{}
    headerLength, maskBytes := header.GenerateHeader(c.isServer, true, false, opcode, n)
    _, _ = payload.WriteTo(buf)
    var contents = buf.Bytes()
    // Clients must mask the payload; servers must not.
    if !c.isServer {
        internal.MaskXOR(contents[frameHeaderSize:], maskBytes)
    }
    // Copy the real header just in front of the payload and drop the unused padding.
    var m = frameHeaderSize - headerLength
    copy(contents[m:], header[:headerLength])
    buf.Next(m)
    return buf, nil
}
- Error handling and recovery: robust error handling and panic recovery keep problems on individual connections from affecting the performance of the whole service, as in the sketch below.
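A minimal sketch of per-connection panic isolation (same gorilla-style conn as the earlier demos; handleMessage is a hypothetical application callback, and log is the standard library logger):

func serveConn(conn *websocket.Conn) {
    go func() {
        // Recover from panics in this connection's handler so one bad
        // connection cannot bring down the whole process.
        defer func() {
            if r := recover(); r != nil {
                log.Printf("connection handler panicked: %v", r)
            }
            conn.Close()
        }()
        for {
            _, message, err := conn.ReadMessage()
            if err != nil {
                return // the connection is closed or broken
            }
            handleMessage(conn, message) // may panic; recovered above
        }
    }()
}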
- Testing and benchmarking: identify performance bottlenecks through extensive testing and benchmarking, and optimize based on the results; a benchmark sketch follows.
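For example, a standard go test benchmark (run with go test -bench=. -benchmem) can measure hot paths such as compression or frame encoding; this sketch lives in a _test.go file (package name illustrative) and times the DEFLATE step used above:

package demo_test

import (
    "bytes"
    "compress/flate"
    "strings"
    "testing"
)

var sample = []byte(strings.Repeat(`{"type":"chat","text":"hello"}`, 100))

// BenchmarkDeflate measures the CPU cost and allocations of compressing
// a typical chat payload, mirroring the permessage-deflate trade-off.
func BenchmarkDeflate(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var out bytes.Buffer
        fw, _ := flate.NewWriter(&out, flate.BestSpeed)
        _, _ = fw.Write(sample)
        _ = fw.Close()
    }
}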