Netty is a high-performance, asynchronous, event-driven network application framework built on Java NIO. It is widely used in Internet services, big data, game development, telecommunications, and other fields. The following is a detailed walk-through of Netty's source code and its business scenarios:
Source Code Overview
- Netty's core components: Netty's architecture is designed around the event-driven model and includes key concepts such as Channel, EventLoopGroup, ChannelHandlerContext, and ChannelPipeline.
- Channel: an abstract representation of a network connection. Each Channel has one or more ChannelHandlers to handle network events such as connection establishment and data reception.
- EventLoopGroup: a collection of EventLoops; each EventLoop is responsible for handling the I/O events of a set of Channels. When an event fires on a Channel, the corresponding EventLoop invokes the appropriate ChannelHandler methods to process it.
- ChannelPipeline: an ordered collection of ChannelHandlers that processes inbound and outbound data. Complex business logic can be realized by adding different handlers to the pipeline.
- Key processes in the source code: the flows to focus on in Netty's source code include initialization, Channel registration, the EventLoop workflow, and connection establishment and binding.
Netty ships an Echo example that demonstrates the basic communication flow between a client and a server: a message sent by the client is received by the server and returned as-is, showing the basic way Netty handles network communication.
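The echo round trip described above can be reproduced with plain JDK sockets. This is deliberately not Netty code — just a minimal blocking sketch of the same request/response flow, with illustrative class and method names:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class MiniEcho {

    // Start a one-shot echo server, send msg to it, and return what comes back.
    static String echoOnce(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0); // port 0 = pick any free port
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String line = in.readLine();
                if (line != null) {
                    out.println(line); // return the message as-is, like the Echo server
                }
            } catch (IOException ignored) { }
        });
        serverThread.start();

        // Client side: send a message and read the echoed reply.
        try (Socket client = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("echoed: " + echoOnce("hello netty"));
    }
}
```

Netty's own Echo example achieves the same result without a thread per connection, which is exactly the point of its event-driven design.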
The following sections introduce these key core components in detail.
1. Channel component
Netty's Channel component is one of the centerpieces of the whole framework. It represents a network connection, on either the client side or the server side, and is the low-level interface for performing network I/O operations. The following is a source-level analysis of the Channel component.
Channel Interface Definition
The Channel interface defines a set of methods for manipulating a network connection, such as bind, connect, read, write, and close (the listing below is a simplified excerpt, not the verbatim Netty source):
public interface Channel extends AttributeMap {
/**
* Returns the {@link ChannelId} of this {@link Channel}.
*/
ChannelId id();
/**
* Returns the parent {@link Channel} of this channel. {@code null} if this is the top-level channel.
*/
Channel parent();
/**
* Returns the {@link ChannelConfig} of this channel.
*/
ChannelConfig config();
/**
* Returns the local address of this channel.
*/
SocketAddress localAddress();
/**
* Returns the remote address of this channel. {@code null} if the channel is not connected.
*/
SocketAddress remoteAddress();
/**
* Returns {@code true} if this channel is open and may be used.
*/
boolean isOpen();
/**
* Returns {@code true} if this channel is active and may be used for IO.
*/
boolean isActive();
/**
* Returns the {@link ChannelPipeline}.
*/
ChannelPipeline pipeline();
/**
* Returns the {@link ChannelFuture} which is fired once the channel is registered with its {@link EventLoop}.
*/
ChannelFuture whenRegistered();
/**
* Returns the {@link ChannelFuture} which is fired once the channel is deregistered from its {@link EventLoop}.
*/
ChannelFuture whenDeregistered();
/**
* Returns the {@link ChannelFuture} which is fired once the channel is closed.
*/
ChannelFuture whenClosed();
/**
* Register this channel to the given {@link EventLoop}.
*/
ChannelFuture register(EventLoop loop);
/**
* Bind and listen for incoming connections.
*/
ChannelFuture bind(SocketAddress localAddress);
/**
* Connect to the given remote address.
*/
ChannelFuture connect(SocketAddress remoteAddress, SocketAddress localAddress);
/**
* Disconnect if connected.
*/
ChannelFuture disconnect();
/**
* Close this channel.
*/
ChannelFuture close();
/**
* Deregister this channel from its {@link EventLoop}.
*/
ChannelFuture deregister();
/**
* Write the specified message to this channel.
*/
ChannelFuture write(Object msg);
/**
* Write the specified message to this channel and generate a {@link ChannelFuture} which is done
* when the message is written.
*/
ChannelFuture writeAndFlush(Object msg);
/**
* Flushes all pending messages.
*/
ChannelFuture flush();
// ... More method definitions
}
Key Methods of Channel
- id(): returns the Channel's unique ChannelId.
- parent(): returns the parent Channel, or null if this is a top-level Channel.
- config(): returns the Channel's configuration.
- localAddress() and remoteAddress(): return the local and remote addresses, respectively.
- isOpen() and isActive(): check whether the Channel is open and whether it is active, respectively.
- pipeline(): returns the ChannelPipeline associated with this Channel, a chain of handlers that processes network events.
- register(), bind(), connect(), disconnect(), close(), deregister(): perform network I/O operations.
Channel Implementation Classes
Netty provides Channel implementations for a wide range of transports and protocols, for example:
- NioSocketChannel: the TCP client Channel implementation for NIO transports.
- NioServerSocketChannel: the TCP server Channel implementation for NIO transports.
- OioSocketChannel and OioServerSocketChannel: similar to the NIO variants, but based on blocking I/O.
The Life Cycle of a Channel
- Creation: a Channel is created through its factory methods and is usually associated with a specific EventLoop.
- Registration: a Channel must be registered with an EventLoop so that its I/O events can be handled.
- Binding/Connection: a server-side Channel binds to a specific address and starts listening; a client-side Channel connects to a remote address.
- Read and Write: data is read from and written to the network through the Channel.
- Closing: the Channel is closed and its associated resources are released.
Channel Event Handling
A Channel's event handling is done through the ChannelPipeline and its ChannelHandlers. The ChannelPipeline is a chain of handlers that processes all I/O events and I/O operations. Each Channel has an associated ChannelPipeline, which can be accessed via the Channel's pipeline() method.
Asynchronous Processing
Channel operations (e.g., bind, connect, write, close) are asynchronous and return a ChannelFuture object, which allows developers to register callbacks that execute when the operation completes or fails.
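The callback style can be illustrated with the JDK's CompletableFuture, which follows the same listener pattern as Netty's ChannelFuture. The sketch below is not Netty API — `writeAsync` is a hypothetical stand-in for an operation like `channel.writeAndFlush(msg)`:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncWriteSketch {

    // Simulate an asynchronous "write": the result is completed on another thread,
    // the caller gets a future back immediately instead of blocking.
    static CompletableFuture<String> writeAsync(String msg) {
        CompletableFuture<String> future = new CompletableFuture<>();
        new Thread(() -> {
            // ... pretend to perform I/O here, then signal completion
            future.complete("wrote: " + msg);
        }).start();
        return future;
    }

    public static void main(String[] args) throws Exception {
        // Register a callback instead of blocking -- the analogous Netty idiom is
        // channel.writeAndFlush(msg).addListener(f -> ...).
        writeAsync("ping").thenAccept(result -> System.out.println(result));
        // Block here only so this demo JVM does not exit before the callback runs.
        writeAsync("pong").get(1, TimeUnit.SECONDS);
    }
}
```

The key point is that the thread issuing the operation never waits for the I/O; completion is delivered to the callback later.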
Memory Management
Netty's Channel implementations also involve memory management: ByteBuf is used as the data container. It is a mutable byte container that provides a rich set of operations for reading and writing network data.
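ByteBuf's distinguishing feature is its two independent indices: a reader index and a writer index, so no flip() is needed between writing and reading (unlike java.nio.ByteBuffer's single position). The toy container below is not Netty's ByteBuf — it is a minimal sketch of the two-index idea only:

```java
import java.util.Arrays;

public class ToyByteBuf {
    private byte[] data = new byte[16];
    private int readerIndex = 0;
    private int writerIndex = 0;

    // Append bytes at writerIndex; the reader side is unaffected.
    public void writeBytes(byte[] src) {
        if (writerIndex + src.length > data.length) {
            data = Arrays.copyOf(data, Math.max(data.length * 2, writerIndex + src.length));
        }
        System.arraycopy(src, 0, data, writerIndex, src.length);
        writerIndex += src.length;
    }

    // Consume bytes from readerIndex; no flip() needed between write and read.
    public byte[] readBytes(int length) {
        if (length > readableBytes()) {
            throw new IndexOutOfBoundsException("not enough readable bytes");
        }
        byte[] out = Arrays.copyOfRange(data, readerIndex, readerIndex + length);
        readerIndex += length;
        return out;
    }

    // Readable region = everything written but not yet read.
    public int readableBytes() {
        return writerIndex - readerIndex;
    }
}
```

For example, writing "abc" and then reading two bytes leaves exactly one readable byte, with no mode switch in between — the same interleaved read/write pattern Netty handlers rely on.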
Summary
Channel is a core interface in Netty that defines the basic operations of network communication. Netty provides a variety of Channel implementations to support different I/O models and protocols. Through the Channel abstraction, Netty enables high-performance, asynchronous, event-driven network communication.
2. EventLoopGroup component
EventLoopGroup is a very important component in Netty. It is responsible for managing a set of EventLoops, and each EventLoop can handle the I/O events of multiple Channels. The following is a detailed analysis of the EventLoopGroup component.
EventLoopGroup Interface Definition
The EventLoopGroup interface defines the methods for managing EventLoops. The following is a simplified excerpt of its key methods:
public interface EventLoopGroup extends ExecutorService {
/**
* Returns the next {@link EventLoop} this group will use to handle an event.
* This will either return an existing or a new instance depending on the implementation.
*/
EventLoop next();
/**
* Shuts down all {@link EventLoop}s and releases all resources.
*/
ChannelFuture shutdownGracefully();
/**
* Shuts down all {@link EventLoop}s and releases all resources.
*/
ChannelFuture shutdownGracefully(long quietPeriod, long timeout, TimeUnit unit);
/**
* Returns a copy of the list of all {@link EventLoop}s that are part of this group.
*/
List<EventLoop> eventLoops();
}
Key Methods of EventLoopGroup
- next(): returns the next EventLoop to be used to handle an event. Depending on the implementation, this may be an existing EventLoop or a newly created one.
- shutdownGracefully(): gracefully shuts down all EventLoops and releases all resources. A quiet period and a timeout can be specified so that pending tasks can complete before shutdown.
- eventLoops(): returns the list of EventLoops that belong to this EventLoopGroup.
EventLoopGroup Implementation Classes
Netty provides several EventLoopGroup implementations, the main ones being:
- NioEventLoopGroup: the NIO-based implementation, using NioEventLoop as its EventLoop implementation; this is the most commonly used variant.
- EpollEventLoopGroup: a Linux-specific implementation, using EpollEventLoop as its EventLoop implementation; it leverages Linux's epoll mechanism to improve performance.
- OioEventLoopGroup: the blocking-I/O implementation, using OioEventLoop as its EventLoop implementation.
How EventLoopGroup Works
- Creation: an EventLoopGroup is created through its constructor; the number of threads can be specified.
- Registration: a Channel must be registered with an EventLoop so that the EventLoop can handle its I/O events.
- Event loop: each EventLoop runs an event loop on its own thread, handling the I/O events of the Channels registered with it.
- Shutdown: an EventLoopGroup can be shut down, releasing all of its resources.
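The core event-loop idea — one dedicated thread repeatedly taking work from a queue — can be sketched without Netty. This toy is not NioEventLoop (which also multiplexes I/O via a Selector); it only shows the single-thread-plus-task-queue structure:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyEventLoop {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private final Thread thread;
    private volatile boolean running = true;

    public ToyEventLoop() {
        // One dedicated thread drains the queue, like an EventLoop's run() loop.
        thread = new Thread(() -> {
            while (running || !tasks.isEmpty()) {
                try {
                    tasks.take().run();
                } catch (InterruptedException e) {
                    break;
                }
            }
        });
        thread.start();
    }

    // Work submitted from any thread is executed on the loop's single thread.
    public void execute(Runnable task) {
        tasks.add(task);
    }

    public void shutdown() {
        running = false;
        tasks.add(() -> { }); // no-op task to unblock a waiting take()
    }
}
```

Because every task for a given loop runs on the same thread, handlers scheduled on it need no locking among themselves — the same property Netty's EventLoop gives to the Channels registered with it.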
Threading Model of EventLoopGroup
- Single-threaded model: an EventLoopGroup contains only one EventLoop; suitable for low-traffic applications.
- Multi-threaded model: an EventLoopGroup contains multiple EventLoops, each running on a separate thread; suitable for highly concurrent applications.
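In the multi-threaded model, next() hands out the group's loops in rotation so that new Channels are spread evenly across threads. A minimal sketch of that round-robin chooser (strings stand in for EventLoop instances; this is the selection idea, not Netty's actual chooser classes):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinChooser {
    private final String[] loops;              // stand-ins for EventLoop instances
    private final AtomicInteger idx = new AtomicInteger();

    public RoundRobinChooser(String... loops) {
        this.loops = loops;
    }

    // next() cycles through the loops, like EventLoopGroup.next().
    public String next() {
        return loops[Math.abs(idx.getAndIncrement() % loops.length)];
    }
}
```

Each Channel keeps the EventLoop it was assigned for its whole lifetime, so all of its events are handled on one thread even though the group as a whole uses many.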
Usage Scenarios for EventLoopGroup
- Server side: a server typically uses two EventLoopGroups, one for accepting connections (bossGroup) and one for handling the accepted connections (workerGroup). The bossGroup usually needs only a few threads, while the workerGroup can be sized to handle more concurrent connections as needed.
- Client side: a client usually needs only a single EventLoopGroup to handle all of its connections.
Sample Code
Here is sample code showing how to use EventLoopGroup:
public class NettyServer {
    public static void main(String[] args) {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1); // For accepting connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(); // For handling connections
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ChannelPipeline p = ch.pipeline();
                     p.addLast(new LoggingHandler());
                     p.addLast(new MyServerHandler());
                 }
             });
            ChannelFuture f = b.bind(8080).sync(); // Bind the port and start the server
            System.out.println("Server started on port 8080");
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
In this example, bossGroup accepts incoming connections and workerGroup handles them. The server is configured through the ServerBootstrap class, and a ChannelInitializer is used to set up each Channel's handler chain.
Summary
EventLoopGroup is a core Netty component that manages the event loops. Through its EventLoops it handles I/O events and supports highly concurrent, asynchronous operation. By configuring the EventLoopGroup appropriately, the performance and scalability of a network application can be improved significantly.
3. ChannelPipeline component
ChannelPipeline is a core component in Netty that manages a set of ChannelHandlers and defines how I/O events and operations flow between these handlers. The following is a detailed analysis of the ChannelPipeline component.
ChannelPipeline Interface Definition
ChannelPipeline is an interface that defines the methods for manipulating ChannelHandlers (simplified excerpt):
public interface ChannelPipeline extends Iterable<ChannelHandler> {
/**
* Add the specified handler to the context of the current channel.
*/
void addLast(EventExecutorGroup executor, String name, ChannelHandler handler);
/**
* Add the specified handlers to the context of the current channel.
*/
void addLast(EventExecutorGroup executor, ChannelHandler... handlers);
// ... addFirst, addBefore, addAfter, remove, replace methods omitted
/**
* Get the {@link ChannelHandler} by its name.
*/
ChannelHandler get(String name);
/**
* Returns the first {@link ChannelHandler} in the {@link ChannelPipeline}.
*/
ChannelHandler first();
/**
* Returns the last {@link ChannelHandler} in the {@link ChannelPipeline}.
*/
ChannelHandler last();
/**
* Returns the context object of the specified handler.
*/
ChannelHandlerContext context(ChannelHandler handler);
// ... context(String name), remove, replace, fireChannelRegistered, fireChannelUnregistered, etc. omitted
}
Key Methods of ChannelPipeline
- addLast(String name, ChannelHandler handler): appends a handler to the end of the pipeline and assigns it a name.
- addFirst(String name, ChannelHandler handler): adds a handler at the beginning of the pipeline.
- addBefore(String baseName, String name, ChannelHandler handler): adds a handler before the named handler.
- addAfter(String baseName, String name, ChannelHandler handler): adds a handler after the named handler.
- get(String name): retrieves a ChannelHandler by name.
- first() and last(): return the first and last handler in the pipeline, respectively.
- context(ChannelHandler handler): returns the context of the specified handler.
ChannelHandlerContext
ChannelHandlerContext is the bridge between a ChannelHandler and its ChannelPipeline. It gives a handler access to its Channel and ChannelPipeline, and the ability to trigger operations that produce ChannelFutures.
public interface ChannelHandlerContext extends AttributeMap, ResourceLeakHint {
/**
* Return the current channel to which this context is bound.
*/
Channel channel();
/**
* Return the current pipeline to which this context is bound.
*/
ChannelPipeline pipeline();
/**
* Return the name of the {@link ChannelHandler} which is represented by this context.
*/
String name();
/**
* Return the {@link ChannelHandler} which is represented by this context.
*/
ChannelHandler handler();
// ... Omission of other methods
}
How ChannelPipeline Works
ChannelPipeline maintains a doubly linked list of ChannelHandlers. Each Channel instance has an associated ChannelPipeline. When an I/O event occurs, such as data being read from the Channel, the event is handed to the ChannelPipeline and then processed by the ChannelHandlers in the order they appear in the pipeline.
Handler Execution Order
- Inbound events: when data is read from the Channel, the event is propagated from the head of the pipeline toward the tail, passing through each inbound ChannelHandler in turn.
- Outbound events: when data needs to be sent, the operation is propagated from the tail of the pipeline toward the head until the data is written out.
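These two directions can be demonstrated with a toy pipeline. This is not Netty's DefaultChannelPipeline — just a minimal sketch of the propagation order, recording which handlers see an event in which direction:

```java
import java.util.ArrayList;
import java.util.List;

public class ToyPipeline {
    private final List<String> names = new ArrayList<>(); // handler order, head first

    public ToyPipeline addLast(String name) {
        names.add(name);
        return this;
    }

    // Inbound events (e.g. channelRead) travel head -> tail.
    public List<String> fireInbound() {
        List<String> visited = new ArrayList<>();
        for (int i = 0; i < names.size(); i++) {
            visited.add(names.get(i));
        }
        return visited;
    }

    // Outbound operations (e.g. write) travel tail -> head.
    public List<String> fireOutbound() {
        List<String> visited = new ArrayList<>();
        for (int i = names.size() - 1; i >= 0; i--) {
            visited.add(names.get(i));
        }
        return visited;
    }
}
```

With handlers added in the order decoder, encoder, business, an inbound event visits them in that same order, while an outbound write visits them in reverse — which is why decoders are placed near the head and encoders still intercept outgoing writes.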
Source Code Analysis
The ChannelPipeline implementation class DefaultChannelPipeline internally maintains the handler order with a doubly linked list of context nodes (simplified):
private final AbstractChannelHandlerContext head;
private final AbstractChannelHandlerContext tail;
private final List<ChannelHandler> handlers = new ArrayList<ChannelHandler>();
- head and tail are the head and tail nodes of the linked list.
- handlers stores the list of all handlers.
When a handler is added, DefaultChannelPipeline updates the linked list and the handler list (simplified):
public void addLast(EventExecutorGroup executor, String name, ChannelHandler handler) {
    if (handler == null) {
        throw new NullPointerException("handler");
    }
    if (name == null) {
        throw new NullPointerException("name");
    }
    AbstractChannelHandlerContext newCtx = new DefaultChannelHandlerContext(this, executor, name, handler);
    synchronized (this) {
        if (tail == null) {
            head = tail = newCtx;
        } else {
            tail.next = newCtx;   // link the new context after the old tail
            newCtx.prev = tail;
            tail = newCtx;
        }
        handlers.add(handler);    // keep the flat list in sync (simplified)
    }
}
Summary
ChannelPipeline is Netty's conduit for handling network events and requests; it manages the flow of events by maintaining a linked list of ChannelHandlers. Through the ChannelHandlerContext, a ChannelHandler can access and modify the Channel and the ChannelPipeline. This design makes the event-handling process highly customizable and flexible, and is one of the key factors behind Netty's performance and ease of use.
4. Key processes in the source code
In the ChannelPipeline source code, the key processes involve adding handlers, triggering events, and propagating events between handlers. Some of these key processes are analyzed below.
1. Adding handlers
When a ChannelPipeline has been created and a ChannelHandler is to be added, Netty allows developers to insert handlers at the beginning, the end, or a specified position in the pipeline.
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("myHandler", new MyChannelHandler());
In the DefaultChannelPipeline class, handlers are stored in a doubly linked list in which each handler node (AbstractChannelHandlerContext) holds references to the previous and next nodes.
2. Event loop and triggering
Every Channel is associated with an EventLoop, which is responsible for handling all of that Channel's events. While running, the EventLoop loops continuously, waiting for and processing I/O events.
// EventLoop event loop
public void run() {
for (;;) {
// ...
processSelectedKeys();
// ...
}
}
3. Event capture and delivery
When the EventLoop detects an I/O event (e.g., data arrival), it triggers the appropriate action. For the ChannelPipeline, this means invoking the appropriate ChannelHandler methods.
// Pseudo-code showing how the event is passed to the ChannelHandler
if (channelRead) {
    pipeline.fireChannelRead(msg);
}
4. Handling inbound and outbound events
- Inbound events (e.g., data reads) are propagated from the head of the ChannelPipeline toward the tail, moving through the inbound handlers until the event has been handled.
- Outbound events (e.g., data writes) are propagated from the tail of the ChannelPipeline toward the head, until the data is written out.
// Inbound event handling
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // Process the message or pass it to the next inbound handler
    ctx.fireChannelRead(msg);
}
// Outbound event handling
public void write(ChannelHandlerContext ctx, Object msg) {
    // Write the message or pass it to the next outbound handler
    ctx.write(msg);
}
5. Traversing the handler chain
The ChannelPipeline must be able to traverse the handler chain so that events are triggered in order. This is typically implemented by asking a ChannelHandlerContext for the next or previous handler.
// Pseudo-code showing how to get the next handler and call it
ChannelHandlerContext nextCtx = ctx.next; // next node in the linked list (simplified)
if (nextCtx != null) {
    nextCtx.handler().channelRead(nextCtx, msg);
}
6. Dynamically modifying the handler chain
During event processing it may be necessary to modify the handler chain dynamically, for example adding a new handler or removing the current one.
ctx.pipeline().addLast("newHandler", new AnotherChannelHandler());
ctx.pipeline().remove(ctx.handler());
7. Resource management and cleanup
When a Channel is closed, the ChannelPipeline must ensure that every ChannelHandler gets a chance to run its cleanup logic and release its resources.
public void channelInactive(ChannelHandlerContext ctx) {
// cleanup logic here
}
8. Exception handling
If an exception is thrown during event handling, the ChannelPipeline must be able to catch and handle it appropriately so that it does not take down the entire pipeline.
try {
    // operations that may throw exceptions
} catch (Exception e) {
    ctx.fireExceptionCaught(e);
}
Summary
The ChannelPipeline source code contains several key flows that ensure events are passed through the handlers in order, while also supporting dynamic modification of the handler chain and exception handling. Together, these flows form the basis of Netty's event-driven network programming model.
Business Scenarios
- Microservices architecture: Netty can serve as the foundation of an RPC framework, enabling efficient communication between services.
- Game servers: Netty's asynchronous, non-blocking nature is ideal for building highly concurrent game servers, since games demand low latency and high concurrency.
- Real-time communication systems: Netty can be used to build systems such as instant messaging and video conferencing that require low-latency data transmission.
- IoT platforms: Netty can serve as the communication bridge between devices and cloud platforms, handling large-scale device connections and data flows.
- Internet businesses: in distributed systems, Netty is often used as the underlying communication component of RPC frameworks; for example, Alibaba's distributed service framework Dubbo uses Netty for network communication.
- Big data: Netty also appears in the network-communication layer of big data technologies, for example in Avro's RPC framework, used for high-performance communication in the Hadoop ecosystem.
Conclusion
By studying Netty's source code in depth and understanding how it is applied in different business scenarios, developers can make better use of this powerful network programming framework to build efficient, stable, and scalable network applications.