
How Netty Automatically Detects the Occurrence of Memory Leaks


This article is based on Netty version 4.1.

This article is the final installment of the Netty memory management series. In the first article, A Chat About the Design and Implementation of the Netty Data Mover ByteBuf Architecture, I took UnpooledByteBuf as an example and analyzed the overall design of ByteBuf from the periphery of memory management. In the second article, Talking about Memory Management in Netty -- A look at Netty's implementation of Jemalloc for Java, I took you deeper into the inner workings of Netty's memory pools and dissected the management of pooled memory in detail.

I don't know if you've noticed, but both non-pooled memory allocations -- UnpooledByteBuf -- and pooled memory allocations -- PooledByteBuf -- end up being wrapped by Netty in a LeakAwareByteBuf.

public final class UnpooledByteBufAllocator {
    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        final ByteBuf buf;
        if (PlatformDependent.hasUnsafe()) {
            buf = noCleaner ? new InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf(this, initialCapacity, maxCapacity) :
                    new InstrumentedUnpooledUnsafeDirectByteBuf(this, initialCapacity, maxCapacity);
        } else {
            buf = new InstrumentedUnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        }
        // If leak detection is enabled, additionally wrap the buffer in a LeakAwareByteBuf before returning
        return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
    }
}
public class PooledByteBufAllocator {
    // thread-local cache
    private final PoolThreadLocalCache threadCache;

    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        // Get the thread-local cache; the first time a thread allocates, it is bound to a PoolArena here
        PoolThreadCache cache = threadCache.get();
        // Get the PoolArena bound to the current thread
        PoolArena<ByteBuffer> directArena = cache.directArena;

        final ByteBuf buf;
        if (directArena != null) {
            // Allocate from the bound PoolArena
            buf = directArena.allocate(cache, initialCapacity, maxCapacity);
        } else {
            // Allocate unpooled memory
            buf = PlatformDependent.hasUnsafe() ?
                    UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity) :
                    new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        }
        // If leak detection is enabled, wrap the PooledByteBuf in a LeakAwareByteBuf before returning
        return toLeakAwareBuffer(buf);
    }
}

As I mentioned before, compared with the JDK's DirectByteBuffer, which has to rely on the GC mechanism to release the Native Memory it references, Netty prefers to release a DirectByteBuf manually and promptly. A JDK DirectByteBuffer has to wait for a GC, and because the object instance itself occupies so little JVM heap memory, a GC is hard to trigger; this delays the release of the referenced Native Memory, and in severe cases more and more of it accumulates until an OOM occurs. It can also introduce very large delays when requesting new DirectByteBuffers.

Netty avoids this by manually releasing Native Memory promptly after each use. But once you stop relying on the JVM, there is always the risk of a memory leak, for example forgetting to call release() on a ByteBuf after you are done with it.

Although manual release is timely and controllable, it is prone to memory leaks. To deal with them, Netty introduces the LeakAwareByteBuf which, as its name suggests, is designed to detect whether the ByteBuf it wraps has leaked.

You may well be curious what kind of magic this LeakAwareByteBuf has that lets it automatically detect memory leaks. For now, though, let's set the LeakAwareByteBuf aside: it is just a thin shell around ByteBuf, and the real core of leak detection lies in a few detection models behind it. So I will start from the core design principles.


1. Design Principles of Memory Leak Detection

First, let's look at the first core question: at what point should we detect memory leaks?

Memory that is in use certainly can't be considered a leak -- no matter how much memory I've consumed, if that memory is genuinely in use, you can't call it a leak. Only when I no longer need the memory but still hold on to it without releasing it can we call it a memory leak.

So the time to detect memory leaks is when memory is no longer being used; memory that is still in use cannot leak and doesn't need to be checked at all. The next question, then, is: how do we determine whether a piece of memory is still in use or no longer in use?

That's gotta be GC! Right? When a DirectByteBuf doesn't have any strong or soft references, it's no longer in use and the GC will reclaim it. When it still has strong or soft references, it's still in use, and the GC won't reclaim it.

But memory leak detection is implemented outside the JVM, and the JVM has no idea what we are trying to do; it just reclaims the DirectByteBuf object and doesn't care whether the Native Memory referenced behind the DirectByteBuf is leaking.

It looks like we can't rely on GC directly. But if we could get a notification from the JVM when a DirectByteBuf is GC'd, and trigger leak detection in that notification, wouldn't that do the trick? So how do we get this notification?
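
To make this concrete, here is a minimal standalone sketch (plain JDK code, not Netty's) of receiving such a notification with a WeakReference and a ReferenceQueue:

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class GcNotificationDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object resource = new byte[1024];                 // stands in for a DirectByteBuf
        WeakReference<Object> ref = new WeakReference<>(resource, queue);

        resource = null;                                  // drop the last strong reference
        System.gc();                                      // request a GC (not guaranteed to run immediately)

        // once the referent has been collected, the JVM enqueues the WeakReference;
        // polling the queue is how we "get notified" that the object is gone
        Reference<?> collected = queue.remove(5000);
        System.out.println("notified: " + (collected == ref));
    }
}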

Do you remember the WeakReference, PhantomReference and FinalReference described in my earlier article, How the JVM implements Reference semantics, using ZGC as an example? They can all deliver this notification.

For example, DirectByteBuffer in the JDK relies on the Cleaner mechanism to reclaim the Native Memory referenced behind it, and the Cleaner is a PhantomReference object.

public class Cleaner extends PhantomReference<Object>


Cleaner holds a phantom reference to the DirectByteBuffer. When the DirectByteBuffer no longer has any strong or soft references -- that is, it will not be used anymore -- it is reclaimed by the GC, and at the same time the JVM puts the Cleaner that phantom-references it onto a linked list called _reference_pending_list.

The JVM then wakes up thread #1 in the JDK, the ReferenceHandler.

        Thread handler = new ReferenceHandler(tg, "Reference Handler");
        // set the ReferenceHandler thread to the highest priority
        handler.setPriority(Thread.MAX_PRIORITY);
        handler.setDaemon(true);
        handler.start();

The ReferenceHandler thread plucks the Cleaners off the JVM's _reference_pending_list one by one and calls their clean() method, which eventually frees the Native Memory in the Deallocator.

  private static class Deallocator implements Runnable {
        public void run() {
            // call unsafe.freeMemory to release the native memory
            unsafe.freeMemory(address);
        }
  }


Another example is the PoolThreadCache, the thread-local cache in the Netty memory pool, which relies on the Finalizer mechanism to reclaim the pooled Native Memory behind the cache.

    private static final class FreeOnFinalize {
        // the PoolThreadCache to free
        private volatile PoolThreadCache cache;

        private FreeOnFinalize(PoolThreadCache cache) {
            this.cache = cache;
        }

        @Override
        protected void finalize() throws Throwable {
            try {
                super.finalize();
            } finally {
                PoolThreadCache cache = this.cache;
                this.cache = null;
                // when the FreeOnFinalize instance is collected, trigger the release of the PoolThreadCache
                if (cache != null) {
                    cache.free(true);
                }
            }
        }
    }

The main purpose of FreeOnFinalize is to reclaim the PoolThreadCache. It overrides the finalize() method, so the JVM creates a Finalizer object (of type FinalReference) for it; the Finalizer references the FreeOnFinalize instance, and this reference relationship is of the FinalReference type.

final class Finalizer extends FinalReference<Object> {

    private static ReferenceQueue<Object> queue = new ReferenceQueue<>();

    private Finalizer(Object finalizee) {
        // finalizee here is the FreeOnFinalize object, referenced through a FinalReference
        super(finalizee, queue);
              ......
    }
}


Finalizer has a global ReferenceQueue, and this ReferenceQueue is very important: the JVM's _reference_pending_list is internal to the JVM, and apart from the ReferenceHandler thread, ordinary Java threads cannot access it. So if we want to process these References (whose referents have been reclaimed) outside the JVM, we need an external queue, and that external queue is the ReferenceQueue in the Finalizer.

   Reference(T referent, ReferenceQueue<? super T> queue) {
        // the FreeOnFinalize object
        this.referent = referent;
        // the global ReferenceQueue instance inside Finalizer
        this.queue = (queue == null) ? ReferenceQueue.NULL : queue;
    }

When the thread terminates, the PoolThreadCache and FreeOnFinalize objects become eligible for GC, but since FreeOnFinalize is referenced by a FinalReference (the Finalizer), the JVM revives the FreeOnFinalize object; and since FreeOnFinalize also references the PoolThreadCache, the PoolThreadCache is revived as well.

The JVM then puts this Finalizer (FinalReference object) into the internal _reference_pending_list, and then the ReferenceHandler thread takes the Finalizer objects off the _reference_pending_list one by one, and puts them into the ReferenceQueue.

Finally, FinalizerThread, thread #2 in the JDK, wakes up, takes the collected Finalizer objects off the ReferenceQueue one by one, and executes their runFinalizer method, which ends up calling the FreeOnFinalize object's finalize() method to free the PoolThreadCache.

        Thread finalizer = new FinalizerThread(tg);
        finalizer.setPriority(Thread.MAX_PRIORITY - 2);
        finalizer.setDaemon(true);
        finalizer.start();


The above are some example implementations for Native Memory recycling, and the same applies to Native Memory leak detection, which is triggered when DirectByteBuf is not in use, i.e., when it is GC'ed.

Netty here uses a WeakReference to get a notification that DirectByteBuf has been GC'd.

final class DefaultResourceLeak<T> extends WeakReference<Object>


As I mentioned earlier, _reference_pending_list is an internal JVM queue, if we want to handle DefaultResourceLeak outside the JVM, we have to pass in a global ReferenceQueue when we create DefaultResourceLeak. Netty's ReferenceQueue for memory leak detection is defined in ResourceLeakDetector.

public class ResourceLeakDetector<T> {
    private final ReferenceQueue<Object> refQueue = new ReferenceQueue<Object>();
}

With this ReferenceQueue in place, when a DirectByteBuf no longer has any strong or soft references in the system and only the weak reference DefaultResourceLeak is left, it will be reclaimed by the GC. From here on, the WeakReference is processed the same way as a PhantomReference or FinalReference.

The JVM puts the DefaultResourceLeak into the internal _reference_pending_list, and the ReferenceHandler thread plucks the DefaultResourceLeak from the _reference_pending_list and puts it into the ReferenceQueue associated with it. The ReferenceQueue is the global refQueue defined in the ResourceLeakDetector and passed in when the DefaultResourceLeak object is created.

Once the DefaultResourceLeak object has been placed into the ReferenceQueue by the ReferenceHandler thread, the later processing is different from the previous one.

The Cleaner is handled directly by the ReferenceHandler thread, the Finalizer is handled by the FinalizerThread thread, so which thread should handle the DefaultResourceLeak here? This is the second core problem we face.

Cleaner and Finalizer are both internal to the JDK, so the JDK has dedicated daemon threads to handle them, while DefaultResourceLeak is a leak detection mechanism Netty implements outside the JDK. Netty can't afford a dedicated daemon thread just for memory leak detection, nor does it need one.

In fact, any thread in Netty can handle DefaultResourceLeak. Memory allocation is a very frequent operation, so it's perfectly fine to piggyback leak detection on memory allocation; there's no need for a dedicated detection thread. Not only does this consume fewer resources, it also detects leaks faster and in a more timely manner.

When a thread calls the ByteBufAllocator to request memory, Netty triggers a check on the ReferenceQueue, and if the queue contains a DefaultResourceLeak, it takes it down to check for memory leaks. So what do we use to determine if a DirectByteBuf has a memory leak? This is the third core problem we face.

Netty maintains a reference count -- refCnt -- for each ByteBuf.

public abstract class AbstractReferenceCountedByteBuf extends AbstractByteBuf {
   // reference count
   private volatile int refCnt;
}

We can get the current reference count of a ByteBuf through its refCnt() method. Whenever the ByteBuf is referenced in another context, we call retain() to add 1 to its reference count, and whenever we are done with the ByteBuf we must manually call release() to decrement its reference count by 1. When the reference count refCnt drops to 0, Netty calls the deallocate method to free the Native Memory referenced by the ByteBuf.

public interface ReferenceCounted {
     int refCnt();
     ReferenceCounted retain();
     boolean release();
}
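
As a quick usage sketch of this contract (the allocator call, buffer size and the hand-off to another component are illustrative, not taken from the article):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

import java.nio.charset.StandardCharsets;

public class RefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        try {
            buf.writeCharSequence("hello", StandardCharsets.UTF_8);
            buf.retain();       // hand the buffer to another component: refCnt 1 -> 2
            buf.release();      // that component is done with it:      refCnt 2 -> 1
        } finally {
            buf.release();      // our own reference: refCnt 1 -> 0, deallocate() frees the memory
        }
        // forgetting this last release() leaves refCnt > 0 forever -- exactly the situation
        // the leak detector reports once the buffer object itself is GC'd
    }
}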

So it is easy to think that we can do something about this reference count refCnt, when a DirectByteBuf is GC'd, if its reference count is 0, it means that the Native Memory it references has been released in time, and there is no memory leak. If its reference count is not 0, then the Native Memory referenced behind it has not been freed, and a memory leak has occurred.

That's a good idea, but unfortunately, we can't get DirectByteBuf anymore, and we can't get its reference count because it's been GC'd, and we can only get the weakly referenced DefaultResourceLeak associated with DirectByteBuf from the ReferenceQueue. So what should we do?

The fundamental basis for deciding whether a DirectByteBuf has leaked is still whether its reference count is 0. But now that the DirectByteBuf has been GC'd, its reference count can no longer be read, so we have to express the semantics of "is the reference count 0" in another dimension to save the day.

How can this be done? Let's go back to the Cleaner and Finalizer mechanisms for inspiration. Inside Cleaner there is a global doubly linked list -- first.

public class Cleaner extends PhantomReference<Object>
{
    private static Cleaner first = null;

    private Cleaner next = null, prev = null;
}


Whenever a Cleaner object is created, the JDK inserts the new Cleaner into this doubly linked list using head insertion.

The purpose of doing this is to keep every Cleaner object in the system reachable from a GC Root, i.e. always on a strong reference chain.

This ensures that, whether before or after the phantom-referenced DirectByteBuffer is reclaimed by the GC, the Cleaner object associated with it always stays alive and is never reclaimed, because we ultimately rely on the Cleaner to release the native memory.

Similarly, to ensure that a Finalizer is not reclaimed by GC before it has executed the finalizee object's finalize() method, Finalizer also has an internal doubly linked list -- unfinalized -- that strongly references all Finalizer objects in the JVM heap.

final class Finalizer extends FinalReference<Object> {
    // doubly linked list holding all Finalizer objects in the JVM heap, preventing them from being reclaimed by GC
    private static Finalizer unfinalized = null;
    private Finalizer next, prev;
}


In the same vein, Netty needs to hold a strong reference to DefaultResourceLeak somewhere globally, to ensure that the weak reference DefaultResourceLeak stays alive until the DirectByteBuf is GC'd.

But unlike Cleaner and Finalizer, Netty doesn't use a doubly linked list to hold strong references to DefaultResourceLeak; it uses a Set.

public class ResourceLeakDetector<T> {
    private final Set<DefaultResourceLeak<?>> allLeaks =
            Collections.newSetFromMap(new ConcurrentHashMap<DefaultResourceLeak<?>, Boolean>());
}


The reason for using a Set here is to express the semantics of "is the reference count 0". So how is that done?

When Netty allocates a DirectByteBuf, it also creates a DefaultResourceLeak object that weakly references the DirectByteBuf, and then places the DefaultResourceLeak object in the allLeaks collection.

When we finish using a DirectByteBuf and call release() to free its Native Memory, Netty removes its DefaultResourceLeak from the allLeaks collection once the reference count reaches zero.

If we finish using the DirectByteBuf but forget to call release(), its reference count stays greater than 0, which also means its corresponding DefaultResourceLeak stays in the allLeaks collection forever.

On the other hand, as long as a DefaultResourceLeak object stays in the allLeaks set, the reference count of the DirectByteBuf that is weakly referenced by it must be greater than zero.

After this DirectByteBuf is reclaimed by the GC, the JVM inserts its corresponding DefaultResourceLeak into the _reference_pending_list, and the ReferenceHandler thread then transfers the DefaultResourceLeak from the _reference_pending_list to the ReferenceQueue.

When an ordinary Java thread later requests a DirectByteBuf from Netty, the requesting thread checks the ReferenceQueue for DefaultResourceLeak objects; if one is there, the DirectByteBuf it weakly referenced has already been GC'd.

Immediately after that, it looks to see if the DefaultResourceLeak object is still in the allLeaks collection, and if it is, then the Native Memory behind DirectByteBuf is still not freed, and Netty has detected a memory leak.
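
To make the idea that "membership in allLeaks stands for refCnt > 0" concrete, here is a toy sketch of the pattern (the class and method names are mine, not Netty's):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class ToyLeakDetector {
    private final Set<ToyLeakTracker> allLeaks =
            Collections.newSetFromMap(new ConcurrentHashMap<>());
    private final ReferenceQueue<Object> refQueue = new ReferenceQueue<>();

    final class ToyLeakTracker extends WeakReference<Object> {
        ToyLeakTracker(Object buf) {
            super(buf, refQueue);
            allLeaks.add(this);         // created together with the buffer
        }

        // called from release() when the reference count drops to 0
        boolean close() {
            clear();
            return allLeaks.remove(this);
        }
    }

    ToyLeakTracker track(Object buf) {
        return new ToyLeakTracker(buf);
    }

    // called on the next allocation: any tracker that is already enqueued (its buffer
    // was GC'd) but is still present in allLeaks marks a leaked buffer
    void reportLeaks() {
        ToyLeakTracker ref;
        while ((ref = (ToyLeakTracker) refQueue.poll()) != null) {
            if (allLeaks.remove(ref)) {
                System.err.println("LEAK: buffer was GC'd without being released");
            }
        }
    }
}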

Now that we have a clear understanding of the core design principles of Netty's memory leak detection, the rest is straightforward. Let's switch perspectives from the internals of leak detection to the outside, and take a complete look at the whole mechanism from the application's point of view.

2. Netty's memory leak detection mechanism

In general terms, the following five conditions need to be met simultaneously to trigger the detection of a memory leak:

  1. The application must have memory leak detection turned on.

  2. The memory leak can only be detected after the ByteBuf has been GC'd. If the GC has not been triggered, then even if the ByteBuf does not have any strong references or soft references, there is no way to detect the memory leak.

  3. Even after a GC has occurred, leak detection is only triggered on the next memory allocation. If no allocation happens, no leak detection happens.

  4. Netty does not detect leaks for every ByteBuf, but rather samples them based on a certain sampling interval. So to trigger memory leak detection, you need to reach a certain sampling interval.

  5. The application's logging level must include the Error level, because Netty reports memory leaks as Error level logs; if the log level is below Error, the leak report is not output.

In addition to this, Netty has four levels for memory leak detection:

    public enum Level {
        DISABLED,
        SIMPLE,
        ADVANCED,
        PARANOID;
    }

We can set a detection level for the application via the JVM parameter -Dio.netty.leakDetection.level. DISABLED turns memory leak detection off entirely. The reason you might want that is that when detection is enabled, the access path of a ByteBuf becomes longer: Netty needs to record the ByteBuf's creation stack and its access stacks, so that the leak report can tell us clearly where the leaked ByteBuf was created, where it leaked, and what its access paths were.
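
For reference, the level can also be raised programmatically; a minimal sketch (ResourceLeakDetector.setLevel is a real Netty API, calling it before the first ByteBuf is allocated is my own precaution):

import io.netty.util.ResourceLeakDetector;

public class LeakDetectionConfig {
    public static void main(String[] args) {
        // equivalent in effect to starting the JVM with -Dio.netty.leakDetection.level=PARANOID
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
        System.out.println("leak detection level: " + ResourceLeakDetector.getLevel());
    }
}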


Each stack in the report occupies about 2K of memory, so the memory consumption is considerable. I generally recommend turning Netty's memory leak detection off in production and keeping it enabled in test environments.

When memory leak detection is turned on, Netty provides three detection levels; the higher the level, the greater the overhead and the more detailed the information. The first level is SIMPLE, which is also Netty's default detection level.

At the SIMPLE level, Netty does not check every ByteBuf for leaks; it samples them, with a default sampling interval of 128.

public class ResourceLeakDetector<T> {
  // sampling interval, default 128
  static final int SAMPLING_INTERVAL;

  private static final String PROP_SAMPLING_INTERVAL = "io.netty.leakDetection.samplingInterval";

  private static final int DEFAULT_SAMPLING_INTERVAL = 128;

  SAMPLING_INTERVAL = SystemPropertyUtil.getInt(PROP_SAMPLING_INTERVAL, DEFAULT_SAMPLING_INTERVAL);
}

We can set the sampling interval for leak detection via the JVM parameter -Dio.netty.leakDetection.samplingInterval. So how does Netty decide, based on this sampling interval, which ByteBuf to check for leaks?

The implementation of this sampling frequency is quite simple: after each memory allocation, Netty generates a random number in [0, samplingInterval). If the random number is 0, Netty tracks the allocated ByteBuf for leaks; if it is not 0, Netty skips it.

PlatformDependent.threadLocalRandom().nextInt(samplingInterval) == 0

Effectively, Netty triggers a memory leak detection for every samplingInterval ByteBuf requested.

Besides being limited by this sampling frequency, the memory leak report at the SIMPLE level is minimal: it only contains the location where the ByteBuf was created. Netty does not track the stacks of later accesses to the ByteBuf, so the "Recent access records:" information in the log is not available at the SIMPLE level.


The ADVANCED level is like the SIMPLE level in that, at both levels, Netty samples ByteBufs instead of checking every one, subject to the same sampling frequency limit.

So what does ADVANCED add over SIMPLE? The SIMPLE level only reports where the leaked ByteBuf was created; ADVANCED additionally tracks the stack of every access to the ByteBuf, which is the "Recent access records" information shown in the leak report log below.


As I mentioned earlier, tracing the access stacks of a ByteBuf is a considerable memory cost: each recorded stack takes up about 2K of memory, and Netty records each stack in a TraceRecord structure.

If a ByteBuf is accessed multiple times, it corresponds to multiple TraceRecords. Netty organizes a ByteBuf's TraceRecords in a stack structure inside the corresponding DefaultResourceLeak: the TraceRecord at the bottom of the stack records the ByteBuf's creation stack, and the TraceRecord at the top records the ByteBuf's most recent access stack.

private static final class DefaultResourceLeak<T> {
    // top-of-stack pointer; the stack stores the access stacks of the corresponding ByteBuf
    private volatile TraceRecord head;
}

private static class TraceRecord extends Throwable {
  // sentinel marking the bottom of the stack
  private static final TraceRecord BOTTOM = new TraceRecord();
}

Since each TraceRecord takes up about 2K of memory to record its stack information, Netty cannot record a stack for every single access to a ByteBuf, no matter what the detection level is, so the number of TraceRecords in the DefaultResourceLeak stack has to be limited. The default limit is 4, and we can adjust it with the -Dio.netty.leakDetection.targetRecords parameter.

public class ResourceLeakDetector<T> {
    // limit on the number of recorded access stacks per ByteBuf, default 4
    private static final int TARGET_RECORDS;

    private static final String PROP_TARGET_RECORDS = "io.netty.leakDetection.targetRecords";

    private static final int DEFAULT_TARGET_RECORDS = 4;

    TARGET_RECORDS = SystemPropertyUtil.getInt(PROP_TARGET_RECORDS, DEFAULT_TARGET_RECORDS);
}

To be more precise, targetRecords only limits the number of TraceRecords in the stack so that it doesn't grow without bound; it is not a hard cap. In fact, there is a chance that the number of TraceRecords in the stack exceeds targetRecords.

For example, the default value of targetRecords is 4. If we hard-capped the stack at 4 TraceRecords, then for a ByteBuf with a very long access chain the stack could only keep the three earliest TraceRecords plus the most recent one, and the access stacks in the middle would all be lost, which makes it harder to reconstruct the complete leak path of the ByteBuf.

In fact, the real semantics of targetRecords is this: once the number of TraceRecords in the ByteBuf's access stack reaches the targetRecords limit, Netty discards the current top-of-stack TraceRecord with a certain (high) probability and pushes the new TraceRecord as the new top. The high discard probability keeps the number of TraceRecords from growing wildly.

But if the (very low) no-discard probability is hit, the original top-of-stack TraceRecord is kept on the stack and the new TraceRecord is pushed on top of it, so the number of TraceRecords can exceed the targetRecords limit. In exchange, as many access stacks as possible from the middle of the ByteBuf's lifetime are preserved, which makes the leak path of the ByteBuf more complete.

PARANOID is the highest level of Netty memory leak detection, the most informative and the most expensive. On top of ADVANCED, it bypasses the sampling frequency limit and performs detailed leak detection on every single ByteBuf. It is usually turned on when you need to pin down an urgent memory leak in a test environment.

3. Design models related to memory leak detection

Now that we have a clear understanding of the design principles of memory leak detection and related applications, it is time to formally introduce the implementation details in this subsection. Netty has designed a total of four detection models, with different models encapsulating different detection responsibilities.

3.1 ResourceLeakDetector

The first model is ResourceLeakDetector. As the name suggests, it is mainly responsible for memory leak detection, and the principle implementation introduced in the first subsection is done in this model.

public class ResourceLeakDetector<T> {
    // detection level
    private static Level level;
    // set of DefaultResourceLeak weak references for all ByteBufs that have not been released yet
    private final Set<DefaultResourceLeak<?>> allLeaks =
            Collections.newSetFromMap(new ConcurrentHashMap<DefaultResourceLeak<?>, Boolean>());
    // receives the notifications that tracked ByteBufs have been reclaimed
    private final ReferenceQueue<Object> refQueue = new ReferenceQueue<Object>();
    // the type of resource being detected, here ByteBuf
    private final String resourceType;
    // sampling interval
    private final int samplingInterval;
    // leak listener; once a leak is detected, Netty calls back the LeakListener
    private volatile LeakListener leakListener;
}

The ResourceLeakDetector encapsulates all the information needed for memory leak detection, the most important parts being the allLeaks set and the refQueue. allLeaks holds the DefaultResourceLeak weak references of all ByteBufs that have not yet been released. After a ByteBuf is created, Netty creates a DefaultResourceLeak instance to weakly reference it, and this DefaultResourceLeak is added to allLeaks.

If the application releases the ByteBuf in time, the corresponding DefaultResourceLeak is removed from allLeaks. If the corresponding DefaultResourceLeak still remains in allLeaks after the ByteBuf has been GC'd, the ByteBuf has leaked.


The refQueue is used to collect the DefaultResourceLeak weak references of ByteBufs that have been GC'd. When a ByteBuf is GC'd, its corresponding DefaultResourceLeak is put into the internal _reference_pending_list by the JVM, and the ReferenceHandler thread is woken up to transfer the DefaultResourceLeak from the _reference_pending_list to the refQueue here.


The ResourceLeakDetector then takes the DefaultResourceLeak off the refQueue and checks whether it is still in the allLeaks collection. If it is, the corresponding ByteBuf has leaked, and the leak path is printed as an ERROR level log.

In addition to this, Netty provides a memory leak listener that allows us to implement autonomous logic to handle memory leaks after they occur.

    public interface LeakListener {

        /**
         * Will be called once a leak is detected.
         */
        void onLeak(String resourceType, String records);
    }

We can register a LeakListener with the ResourceLeakDetector through the ByteBufUtil.setLeakListener method.

public final class ByteBufUtil {

    public static void setLeakListener(ResourceLeakDetector.LeakListener leakListener) {
        AbstractByteBuf.leakDetector.setLeakListener(leakListener);
    }
}

As soon as the ResourceLeakDetector detects the occurrence of a memory leak, Netty calls back the LeakListener we registered.
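
A minimal registration sketch (the listener body is illustrative; in practice you would forward the report to your own monitoring or alerting pipeline):

import io.netty.buffer.ByteBufUtil;

public class LeakListenerSetup {
    public static void main(String[] args) {
        // called in addition to Netty's own ERROR log whenever a leak is detected
        ByteBufUtil.setLeakListener((resourceType, records) -> {
            System.err.println("leak of " + resourceType + " detected");
            System.err.println(records);   // the "Recent access records" stack paths
        });
    }
}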

Netty will only have one instance of ResourceLeakDetector globally, referenced by the AbstractByteBuf static field leakDetector.

public abstract class AbstractByteBuf extends ByteBuf {
    // the global ResourceLeakDetector instance
    static final ResourceLeakDetector<ByteBuf> leakDetector =
            ResourceLeakDetectorFactory.instance().newResourceLeakDetector(ByteBuf.class);
}

The default leak detector implementation is ResourceLeakDetector, but we can also customize it by extending the ResourceLeakDetector class, overriding the relevant core detection methods, and specifying the subclass via the JVM parameter -Dio.netty.customResourceLeakDetector={className}.
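
A sketch of what such a subclass might look like (the class name and the metrics hook are mine; as far as I know the factory instantiates the custom class reflectively through a (Class<?>, int) or (Class<?>, int, long) constructor, so one of those needs to exist):

import io.netty.util.ResourceLeakDetector;

public class MetricsLeakDetector<T> extends ResourceLeakDetector<T> {

    // constructor signature assumed to be the one looked up by ResourceLeakDetectorFactory
    public MetricsLeakDetector(Class<?> resourceType, int samplingInterval) {
        super(resourceType, samplingInterval);
    }

    @Override
    protected void reportTracedLeak(String resourceType, String records) {
        // e.g. bump a monitoring counter here, then fall back to the default ERROR log
        super.reportTracedLeak(resourceType, records);
    }
}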

The two most central methods of ResourceLeakDetector are track(T obj) and reportLeak().

public class ResourceLeakDetector<T> {

    public final ResourceLeakTracker<T> track(T obj) {
        return track0(obj, false);
    }

    // sampling interval, default 128
    private final int samplingInterval;

    // perform resource leak detection for obj
    // force indicates whether detection is mandatory
    private DefaultResourceLeak track0(T obj, boolean force) {
        Level level = ResourceLeakDetector.level;
        if (force ||
                level == Level.PARANOID ||
                (level != Level.DISABLED && PlatformDependent.threadLocalRandom().nextInt(samplingInterval) == 0)) {
            // trigger leak detection; if a leak has occurred it is reported in the log
            reportLeak();
            // create the weak reference DefaultResourceLeak for the ByteBuf (obj),
            // registering the global refQueue and allLeaks of the ResourceLeakDetector
            return new DefaultResourceLeak(obj, refQueue, allLeaks, getInitialHint(resourceType));
        }
        return null;
    }
}

The track method triggers leak tracking, implementing what was described in the second section. If we set the leak detection level to PARANOID, Netty tracks every ByteBuf in the system, and the report log after a leak occurs contains the detailed stack path of the leak.

If the leak detection level is SIMPLE or ADVANCED, Netty samples the ByteBufs in the system. The sampling interval SAMPLING_INTERVAL defaults to 128 and can be set via the JVM parameter -Dio.netty.leakDetection.samplingInterval.

The sampling logic is that Netty generates a random number in [0, samplingInterval); if the random number is 0, leak tracking is performed, otherwise it is skipped. Effectively, Netty tracks roughly one out of every samplingInterval ByteBufs requested.

PlatformDependent.threadLocalRandom().nextInt(samplingInterval) == 0

Once the conditions for detection are met, Netty calls the reportLeak() method to detect memory leaks; if a leak has occurred, the access path of the leaked ByteBuf is printed as an ERROR level log.

Since memory leaks are logged at the ERROR level, before doing any leak detection we must first check whether the user has the ERROR log level enabled.

    protected boolean needReport() {
        return logger.isErrorEnabled();
    }

If the user has chosen a lower logging level, then even if a memory leak occurs the corresponding ERROR log will not be printed, so leak detection is pointless. In that case Netty calls the clearRefQueue() method, which drains all DefaultResourceLeak instances collected in the refQueue and removes them from the allLeaks collection.

    private void clearRefQueue() {
        for (;;) {
            // drain the refQueue
            DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();
            if (ref == null) {
                break;
            }
            // remove the DefaultResourceLeak from the allLeaks set
            ref.dispose();
        }
    }

If the user's log level is ERROR, Netty continues with the leak detection process. First, once a ByteBuf has been reclaimed by GC, the DefaultResourceLeak that weakly references it is moved to the refQueue by the ReferenceHandler thread.

This means that all DefaultResourceLeak's corresponding ByteBufs that are currently held in the refQueue have already been reclaimed by the GC, and it is these reclaimed ByteBufs that are the target of memory leak detection.

Netty takes these collected DefaultResourceLeak off the refQueue one by one.

DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();

It then calls the dispose() method to check whether the DefaultResourceLeak instance is still sitting in the allLeaks collection.

        boolean dispose() {
            // break the weak-reference association between DefaultResourceLeak and the ByteBuf
            clear();
            // check whether this DefaultResourceLeak instance is still in the allLeaks collection
            return allLeaks.remove(this);
        }

If it is still in allLeaks, the ByteBuf corresponding to that DefaultResourceLeak has leaked. After detecting the leak, Netty calls getReportAndClearRecords() to obtain the ByteBuf's recorded access stacks, prints the leak path as an ERROR level log via reportTracedLeak, and finally calls back the registered LeakListener.

    // resourceType is the type of resource being detected, here ByteBuf
    // records is the recorded access stack of the leaked ByteBuf
    protected void reportTracedLeak(String resourceType, String records) {
        logger.error(
                "LEAK: {}.release() was not called before it's garbage-collected. " +
                "See https://netty.io/wiki/reference-counted-objects.html for more information.{}",
                resourceType, records);
    }

The implementation of reportLeak() is exactly what I presented in the first section:

    private void reportLeak() {
        // the log level must include Error
        if (!needReport()) {
            clearRefQueue();
            return;
        }

        // Detect and report previous leaks.
        for (;;) {
            // the corresponding ByteBuf must already have been reclaimed by GC for detection to trigger
            DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();
            if (ref == null) {
                break;
            }
            // check whether the DefaultResourceLeak of this ByteBuf is still in the allLeaks collection
            if (!ref.dispose()) {
                // if not, the ByteBuf was released in time and there is no memory leak
                continue;
            }
            // the ByteBuf has leaked; fetch its recorded access stacks
            String records = ref.getReportAndClearRecords();
            // de-duplicate the leak reports
            if (reportedLeaks.add(records)) {
                // print the leak stack path
                if (records.isEmpty()) {
                    reportUntracedLeak(resourceType);
                } else {
                    reportTracedLeak(resourceType, records);
                }
                // call back the LeakListener
                LeakListener listener = leakListener;
                if (listener != null) {
                    listener.onLeak(resourceType, records);
                }
            }
        }
    }

3.2 ResourceLeakTracker

The ResourceLeakDetector introduced in the previous subsection is only responsible for detecting memory leaks, but if a memory leak is detected, where does the information about the leak path come from? How does Netty collect it? This brings us to the second detection model -- ResourceLeakTracker.

Netty's default implementation of ResourceLeakTracker is DefaultResourceLeak, which is a WeakReference that is used by Netty to weakly reference an associated ByteBuf in order to receive notification that the ByteBuf has been reclaimed by the GC, and thus to determine whether a memory leak has occurred.


In addition, another important responsibility of ResourceLeakTracker is to collect the access stacks of the ByteBuf. Once the ByteBuf leaks, the ResourceLeakDetector obtains the leak stacks from the ResourceLeakTracker via its getReportAndClearRecords() method and prints them to the log.

For each ByteBuf, Netty encapsulates the access link stack information in a TraceRecord structure. If a ByteBuf has more than one access link, there are more than one TraceRecords in its ResourceLeakTracker structure. Netty organizes these TraceRecords in a stack structure.


    private static final class DefaultResourceLeak<T>
            extends WeakReference<Object> implements ResourceLeakTracker<T>, ResourceLeak {
        // top-of-stack pointer
        private volatile TraceRecord head;
        // number of TraceRecords dropped from the stack
        private volatile int droppedRecords;
        // points to the global allLeaks set of the ResourceLeakDetector
        private final Set<DefaultResourceLeak<?>> allLeaks;
        // hash of the tracked ByteBuf
        private final int trackedHash;
    }

When Netty allocates a new ByteBuf, if the allocation hits the detection conditions in track0, Netty creates a DefaultResourceLeak that weakly references the new ByteBuf.

Regardless of the probing level, DefaultResourceLeak keeps at least one TraceRecord, which is used to save the ByteBuf's creation location on the stack, and is added to the bottom of the stack when DefaultResourceLeak is built.


        DefaultResourceLeak(
                Object referent,
                ReferenceQueue<Object> refQueue,
                Set<DefaultResourceLeak<?>> allLeaks,
                Object initialHint) {
            // weakly reference the associated ByteBuf (referent) and register the refQueue
            super(referent, refQueue);
            // save the hash of the ByteBuf
            trackedHash = System.identityHashCode(referent);
            // add to allLeaks; if the DefaultResourceLeak is still in allLeaks after the ByteBuf
            // is reclaimed, a memory leak has occurred
            allLeaks.add(this);
            // create the first TraceRecord, recording where the ByteBuf was created, and place it at the bottom of the stack
            headUpdater.set(this, initialHint == null ?
                    new TraceRecord(TraceRecord.BOTTOM) : new TraceRecord(TraceRecord.BOTTOM, initialHint));
            this.allLeaks = allLeaks;
        }

Additionally, we can use the record method to add the ByteBuf's current access stack to the DefaultResourceLeak.

        @Override
        public void record() {
            record0(null);
        }

        @Override
        public void record(Object hint) {
            record0(hint);
        }

Stacks added via record(Object hint) appear in the leak log together with our customized hint message.


Stacks added via the no-argument record() do not carry this hint message in the leak log.

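In application code we normally don't call record() on the tracker directly; calling touch() on a ByteBuf that is wrapped in an AdvancedLeakAwareByteBuf forwards to it. A small usage sketch (buffer size and hint text are illustrative):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class TouchDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
        try {
            // at the ADVANCED/PARANOID level this ends up in DefaultResourceLeak.record(hint),
            // so the hint shows up next to the corresponding stack in the leak report
            buf.touch("decoded request header");
            buf.writeInt(42);
            buf.touch();   // record the current access stack without a hint
        } finally {
            buf.release();
        }
    }
}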

The logic of adding a new TraceRecord to DefaultResourceLeak is also very simple: push the ByteBuf's latest access stack -- a TraceRecord -- onto the stack. However, TraceRecords cannot be pushed onto the stack indefinitely.

As I mentioned in the second section, the access stack recorded in each TraceRecord takes up about 2K of memory, and Netty cannot record a stack for every access to the ByteBuf, so the number of TraceRecords in the DefaultResourceLeak stack is limited by TARGET_RECORDS, which defaults to 4 and can be adjusted with the -Dio.netty.leakDetection.targetRecords parameter.

When the number of TraceRecords recorded in the DefaultResourceLeak stack reaches the TARGET_RECORDS limit, Netty will, with a certain (high) probability, discard the current top-of-stack TraceRecord and put a new one as the top of the stack. This prevents the number of TraceRecords from growing wildly.

But if the (very low) no-discard probability is hit, the original top-of-stack TraceRecord is kept on the stack and the new TraceRecord is pushed on top of it, so the number of TraceRecords can exceed the TARGET_RECORDS limit. In exchange, as many access stacks as possible from the middle of the ByteBuf's lifetime are preserved, making the leak path more complete.

The logic for calculating the discard probability is also simple: Netty generates a random number in [0, 1 << backOffFactor), and if this random number is not 0, the current top-of-stack element is discarded. So once the number of TraceRecords in the DefaultResourceLeak stack reaches the TARGET_RECORDS limit, the probability that the top of the stack gets discarded as more TraceRecords are added is quite high.

// numElements is the current number of TraceRecords in the stack
final int backOffFactor = Math.min(numElements - TARGET_RECORDS, 30);
dropped = PlatformDependent.threadLocalRandom().nextInt(1 << backOffFactor) != 0;
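
To make the numbers concrete (taking the default TARGET_RECORDS = 4 and the formula above): when the stack holds exactly 4 records, backOffFactor is 0, nextInt(1) always returns 0 and the new record is simply pushed on; at 5 records the top is replaced with probability 1/2, at 6 with 3/4, at 7 with 7/8, and in general with probability 1 - 2^-(numElements - TARGET_RECORDS), capped once backOffFactor reaches 30. So the stack can creep a little past the limit, but the further it does, the more likely each new record is to replace the top rather than grow the stack.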

The complete stack entry logic for TraceRecord is as follows:

        private void record0(Object hint) {
            if (TARGET_RECORDS > 0) {
                TraceRecord oldHead;
                TraceRecord prevHead;
                TraceRecord newHead;
                boolean dropped;
                do {
                    // get the top-of-stack TraceRecord, i.e. the ByteBuf's most recent access stack
                    if ((prevHead = oldHead = headUpdater.get(this)) == null) {
                        // a null top of stack means the ByteBuf has been released and leak tracking closed
                        return;
                    }
                    // the number of TraceRecords currently in the stack
                    final int numElements = oldHead.pos + 1;
                    // once the TARGET_RECORDS limit is reached, probabilistically discard the current
                    // top of stack and use the new TraceRecord as the new top
                    if (numElements >= TARGET_RECORDS) {
                        final int backOffFactor = Math.min(numElements - TARGET_RECORDS, 30);
                        // the further numElements exceeds TARGET_RECORDS, the more likely the top of stack is dropped
                        if (dropped = PlatformDependent.threadLocalRandom().nextInt(1 << backOffFactor) != 0) {
                            // drop the current top-of-stack TraceRecord when the drop probability is hit
                            prevHead = oldHead.next;
                        }
                    } else {
                        // keep the current top of stack; the number of TraceRecords may then exceed TARGET_RECORDS,
                        // but access stacks from the middle of the ByteBuf's lifetime are probabilistically preserved
                        dropped = false;
                    }
                    // create a new TraceRecord (recording the ByteBuf's current access stack)
                    // and use it as the new top of the stack
                    newHead = hint != null ? new TraceRecord(prevHead, hint) : new TraceRecord(prevHead);
                } while (!headUpdater.compareAndSet(this, oldHead, newHead));

                if (dropped) {
                    // count the number of dropped TraceRecords
                    droppedRecordsUpdater.incrementAndGet(this);
                }
            }
        }

Okay, now that we have a clear picture of how Netty collects a ByteBuf's access stacks via DefaultResourceLeak, how does Netty generate the leak stack report when a memory leak occurs on this ByteBuf?

This relies on the TraceRecord stack in DefaultResourceLeak: the top TraceRecord always holds the ByteBuf's most recent access stack, the bottom TraceRecord always holds the ByteBuf's creation stack, and the TraceRecords in the middle record the ByteBuf's access path.


The leak stack of ByteBuf is printed from the TraceRecord at the top of the stack all the way to the TraceRecord at the bottom of the stack, i.e., the leak path of ByteBuf is output from near to far.

        String getReportAndClearRecords() {
            // get the top-of-stack TraceRecord and clear the stack
            TraceRecord oldHead = headUpdater.getAndSet(this, null);
            // output the ByteBuf's TraceRecords from nearest to farthest
            return generateReport(oldHead);
        }

First Netty prints a line "Recent access records:", and then each TraceRecord in the log is given a # number; the TraceRecord at the top of the stack is #1, and the numbers increase from there. The TraceRecord at the bottom of the stack records the creation stack, so Netty labels it in the log with "Created at:".


        private String generateReport(TraceRecord oldHead) {
            // how many TraceRecords are in this DefaultResourceLeak's stack in total
            int present = oldHead.pos + 1;
            // reserve roughly 2K of memory per TraceRecord
            StringBuilder buf = new StringBuilder(present * 2048).append(NEWLINE);
            buf.append("Recent access records: ").append(NEWLINE);
            int i = 1;
            int duped = 0;
            // de-duplicate identical stacks
            Set<String> seen = new HashSet<String>(present);
            // generate the leak stack starting from the top of the stack
            for (; oldHead != TraceRecord.BOTTOM; oldHead = oldHead.next) {
                // the stack information recorded by this TraceRecord
                String s = oldHead.toString();
                if (seen.add(s)) {
                    if (oldHead.next == TraceRecord.BOTTOM) {
                        // the bottom TraceRecord records where the buffer was created
                        buf.append("Created at:").append(NEWLINE).append(s);
                    } else {
                        buf.append('#').append(i++).append(':').append(NEWLINE).append(s);
                    }
                } else {
                    // count duplicated TraceRecords
                    duped++;
                }
            }
            // finish the leak report and return it
            buf.setLength(buf.length() - NEWLINE.length());
            return buf.toString();
        }

3.3 TraceRecord

How is each access stack that appears in the memory leak log above generated? This brings us to the third model, TraceRecord, which is used to record a single access stack of a ByteBuf. The implementation is very simple: TraceRecord just extends Throwable, so every time a TraceRecord is created it automatically captures the ByteBuf's current access stack.

Since the TraceRecord is organized in a stack structure in DefaultResourceLeak, its next pointer points to the next TraceRecord in the stack, and pos is used to identify the current position of the TraceRecord in the stack, the whole structure is relatively simple and straightforward.
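
A minimal standalone illustration of this trick (class and method names are mine): constructing any Throwable captures the creating thread's stack trace via fillInStackTrace(), so simply instantiating a TraceRecord-like object records who called it.

public class StackCaptureDemo {

    // plays the role of TraceRecord: nothing to implement, Throwable does the capturing
    static class Record extends Throwable {
    }

    static void someByteBufAccess() {
        Record record = new Record();
        // print the captured frames, which include someByteBufAccess() and main()
        for (StackTraceElement frame : record.getStackTrace()) {
            System.out.println("\t" + frame);
        }
    }

    public static void main(String[] args) {
        someByteBufAccess();
    }
}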

    private static class TraceRecord extends Throwable {
        // sentinel marking the bottom of the stack
        private static final TraceRecord BOTTOM = new TraceRecord() {
            @Override
            public Throwable fillInStackTrace() {
                return this;
            }
        };
        // the customized hint message that appears in the log
        private final String hintString;
        // the next TraceRecord in the stack
        private final TraceRecord next;
        // the position of this TraceRecord in the stack
        private final int pos;
    }


TraceRecord's toString() method generates the stack information it recorded. The implementation is also very simple: just print the stack captured by the Throwable.

        @Override
        public String toString() {
            // each TraceRecord's stack information takes up roughly 2K of memory
            StringBuilder buf = new StringBuilder(2048);
            if (hintString != null) {
                // show our customized hint in the log
                buf.append("\tHint: ").append(hintString).append(NEWLINE);
            }

            // get the stack trace recorded by this TraceRecord
            StackTraceElement[] array = getStackTrace();
            // skip the first three elements
            out: for (int i = 3; i < array.length; i++) {
                StackTraceElement element = array[i];

                ....... filter out some useless stack frames ......

                // append the useful stack frames
                buf.append('\t');
                buf.append(element.toString());
                buf.append(NEWLINE);
            }
            return buf.toString();
        }

3.4 LeakAwareByteBuf

That covers all the core design behind memory leak detection. Now that the background is clear, looking back at the questions raised at the beginning of the article, doesn't it all feel more familiar?

Each time Netty allocates memory, it triggers a memory leak sampling probe, and if it hits the sampling probability, subsequent memory leak tracking is performed on this allocated ByteBuf.

public final class UnpooledByteBufAllocator {
    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        final ByteBuf buf;

        ....... allocate UnpooledByteBuf .....

        // If leak detection is enabled, additionally wrap the buffer in a LeakAwareByteBuf before returning
        return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
    }
}

public class PooledByteBufAllocator {
    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {

         ....... allocate PooledByteBuf .....

        // If leak detection is enabled, wrap the PooledByteBuf in a LeakAwareByteBuf before returning
        return toLeakAwareBuffer(buf);
    }
}

Netty introduces a fourth model, the LeakAwareByteBuf, for tracking ByteBuf memory leaks. As the name suggests, a LeakAwareByteBuf is designed to identify whether the ByteBuf it wraps has leaked.

After each hit on the sampling probability, Netty wraps the plain ByteBuf in a LeakAwareByteBuf and returns that.

    protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
        // DefaultResourceLeak is used to track the leak path of the ByteBuf
        ResourceLeakTracker<ByteBuf> leak;
        switch (ResourceLeakDetector.getLevel()) {
            case SIMPLE:
                // trigger the sampled leak detection; if the sampling frequency is hit,
                // create a DefaultResourceLeak (weak reference) for the ByteBuf
                leak = AbstractByteBuf.leakDetector.track(buf); // see section 3.1
                if (leak != null) {
                    // the SIMPLE level corresponds to SimpleLeakAwareByteBuf
                    buf = new SimpleLeakAwareByteBuf(buf, leak);
                }
                break;
            case ADVANCED:
            case PARANOID:
                // trigger the sampled leak detection
                leak = AbstractByteBuf.leakDetector.track(buf); // see section 3.1
                if (leak != null) {
                    // the ADVANCED and PARANOID levels correspond to AdvancedLeakAwareByteBuf
                    buf = new AdvancedLeakAwareByteBuf(buf, leak);
                }
                break;
            default:
                break;
        }
        // if the sampling frequency was hit, return the buffer wrapped in a LeakAwareByteBuf;
        // otherwise return it as is
        return buf;
    }

When the leak detection level is SIMPLE, Netty wraps the ByteBuf in a SimpleLeakAwareByteBuf; when the level is ADVANCED or PARANOID, Netty wraps the ByteBuf in an AdvancedLeakAwareByteBuf.


From the class inheritance diagram we can see that both SimpleLeakAwareByteBuf and AdvancedLeakAwareByteBuf inherit from WrappedByteBuf, which means they are just a simple decoration of the original plain ByteBuf (the decorator design pattern).

class SimpleLeakAwareByteBuf extends WrappedByteBuf {
   // the plain ByteBuf being tracked
   private final ByteBuf trackedByteBuf;
   // the DefaultResourceLeak that weakly references the ByteBuf
   final ResourceLeakTracker<ByteBuf> leak;

   SimpleLeakAwareByteBuf(ByteBuf wrapped, ResourceLeakTracker<ByteBuf> leak) {
        this(wrapped, wrapped, leak);
    }

   SimpleLeakAwareByteBuf(ByteBuf wrapped, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leak) {
        super(wrapped);
        this.trackedByteBuf = ObjectUtil.checkNotNull(trackedByteBuf, "trackedByteBuf");
        this.leak = ObjectUtil.checkNotNull(leak, "leak");
    }
}

One of the core decorating attributes of LeakAwareByteBuf is leak, which points to the DefaultResourceLeak that weakly references the trackedByteBuf. When the DefaultResourceLeak is created, it is added to the global allLeaks collection.


Initially the DefaultResourceLeak stack contains only a single TraceRecord at the bottom, which records the stack trace of where the trackedByteBuf was created. At the SIMPLE detection level, only this creation stack will appear in the memory leak log.
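
As a rough illustration of how such a creation record can be captured, here is a minimal sketch of the idea only, using a hypothetical CreationRecord class rather than Netty's actual TraceRecord: simply constructing a Throwable at the call site is enough to snapshot the current stack.

// CreationRecord is a made-up class for illustration; Netty's TraceRecord follows the
// same idea of extending Throwable so that construction captures the current stack trace.
final class CreationRecord extends Throwable {
    CreationRecord(String hint) {
        super(hint);
    }
}

class CreationRecordDemo {
    public static void main(String[] args) {
        // At allocation time, constructing the record captures "where we are right now"
        CreationRecord bottomOfStack = new CreationRecord("buffer created here");
        for (StackTraceElement frame : bottomOfStack.getStackTrace()) {
            System.out.println("\tat " + frame);
        }
    }
}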


So there is nothing special about SimpleLeakAwareByteBuf's read and write related methods; they are simple proxies to the trackedByteBuf.

class SimpleLeakAwareByteBuf extends WrappedByteBuf {
    @Override
    public byte readByte() {
        return trackedByteBuf.readByte();
    }

    @Override
    public ByteBuf writeByte(int value) {
        trackedByteBuf.writeByte(value);
        return this;
    }
}

SimpleLeakAwareByteBuf's release() method is worth a closer look. When we are done with a SimpleLeakAwareByteBuf we need to release it promptly by hand. Once its reference count drops to 0, memory leak detection for it must additionally be closed, because a buffer that is released in time cannot leak.

    @Override
    public boolean release() {
        // The reference count dropped to 0 and the underlying buffer was deallocated
        if (super.release()) {
            // Turn off memory leak detection
            closeLeak();
            return true;
        }
        return false;
    }

   private void closeLeak() {
        // Close the ResourceLeakTracker with the tracked ByteBuf as argument
        boolean closed = leak.close(trackedByteBuf);
        assert closed;
    }
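
In application code the important habit is simply to release in a finally block so the reference count reliably reaches 0. A hedged usage sketch, assuming the default pooled allocator (the buffer size and payload below are made up for illustration):

import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ReferenceCountUtil;

public class ReleaseDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(128);
        try {
            buf.writeBytes("hello".getBytes(StandardCharsets.UTF_8));
            // ... use the buffer ...
        } finally {
            // Reference count reaches 0 here, so closeLeak() removes the tracker from allLeaks
            ReferenceCountUtil.release(buf);
        }
    }
}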

The core steps to turn off memory leak detection for trackedByteBuf are:

  1. First remove the DefaultResourceLeak from the allLeaks set, since allLeaks holds the DefaultResourceLeaks of all trackedByteBufs that have not yet been released.

  2. Break the weak reference association between the DefaultResourceLeak and the trackedByteBuf, so that when the trackedByteBuf is GC'd the JVM does not put the DefaultResourceLeak into the _reference_pending_list but instead reclaims the DefaultResourceLeak together with the trackedByteBuf. As a result, the DefaultResourceLeak never shows up in the refQueue, and the ResourceLeakDetector cannot mistakenly detect it.

    // java.lang.ref.Reference#clear()
    public void clear() {
        this.referent = null;
    }
  3. Empty the TraceRecords saved in the DefaultResourceLeak stack.
private static final class DefaultResourceLeak<T>
            extends WeakReference<Object> implements ResourceLeakTracker<T>, ResourceLeak {

        @Override
        public boolean close() {
            // Remove the DefaultResourceLeak from the allLeaks set
            if (allLeaks.remove(this)) {
                // Break the weak reference association between the DefaultResourceLeak and the trackedByteBuf
                clear();
                // Empty the DefaultResourceLeak TraceRecord stack
                headUpdater.set(this, null);
                return true;
            }
            return false;
        }
}

If we forget to release the SimpleLeakAwareByteBuf, its corresponding DefaultResourceLeak stays in the allLeaks collection. After the SimpleLeakAwareByteBuf is GC'd, the JVM puts the DefaultResourceLeak into the _reference_pending_list and then wakes up the ReferenceHandler thread, which transfers the DefaultResourceLeak from the _reference_pending_list to the refQueue.
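
This hand-off is plain JDK reference-processing machinery rather than anything Netty-specific. A minimal sketch of the same pattern (a generic WeakReference registered with a ReferenceQueue, not Netty code; the 5-second timeout is arbitrary):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> refQueue = new ReferenceQueue<>();
        Object tracked = new Object();
        WeakReference<Object> ref = new WeakReference<>(tracked, refQueue);

        tracked = null;   // drop the last strong reference, like a leaked ByteBuf going out of scope
        System.gc();      // after GC the ReferenceHandler thread enqueues the reference

        // remove() blocks until the reference shows up in the queue, analogous to the
        // ResourceLeakDetector polling refQueue on the next allocation
        System.out.println(refQueue.remove(5000) == ref);
    }
}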


If the leak-detection sampling probability is hit on the next memory allocation, the ResourceLeakDetector then takes the DefaultResourceLeaks collected in the refQueue one by one and checks whether each of them is still in allLeaks.

If it is still in allLeaks, the ByteBuf corresponding to that DefaultResourceLeak has leaked, and the leak path stored in the DefaultResourceLeak stack is printed at the ERROR log level.

public class ResourceLeakDetector<T> {

    private void reportLeak() {
        // Detect and report previous leaks.
        for (;;) {
            DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();
            if (ref == null) {
                break;
            }
            // Check whether the DefaultResourceLeak corresponding to the ByteBuf is still in the allLeaks collection
            if (!ref.dispose()) {
                // If not, the ByteBuf was released in time and there is no memory leak
                continue;
            }
            // A memory leak was detected for the ByteBuf, fetch the access stack recorded for it
            String records = ref.toString();
            // Print the leak path at the ERROR log level
            reportTracedLeak(resourceType, records);
        }
    }
}

That covers the implementation logic at the SIMPLE detection level. The ADVANCED and PARANOID levels are characterized by collecting detailed access stacks, so AdvancedLeakAwareByteBuf builds on SimpleLeakAwareByteBuf and decorates the access methods such as read and write. What does the decoration add? On every access to the AdvancedLeakAwareByteBuf it pushes the current stack trace onto the DefaultResourceLeak stack.

final class AdvancedLeakAwareByteBuf extends SimpleLeakAwareByteBuf {

    AdvancedLeakAwareByteBuf(ByteBuf buf, ResourceLeakTracker<ByteBuf> leak) {
        super(buf, leak);
    }

    @Override
    public byte readByte() {
        // Record the currently accessed stack
        recordLeakNonRefCountingOperation(leak);
        return super.readByte();
    }

    @Override
    public ByteBuf writeByte(int value) {
        // Record the currently accessed stack
        recordLeakNonRefCountingOperation(leak);
        return super.writeByte(value);
    }

    static void recordLeakNonRefCountingOperation(ResourceLeakTracker<ByteBuf> leak) {
        if (!ACQUIRE_AND_RELEASE_ONLY) {
            // Add a new stack record to the DefaultResourceLeak
            leak.record();
        }
    }
}

However, there is a practical problem: ByteBuf has a huge number of methods, and recording a stack for every single method access would consume far too much memory. The number of TraceRecords kept in the DefaultResourceLeak stack is therefore bounded by -Dio.netty.leakDetection.targetRecords and cannot grow indefinitely.

So Netty also provides the JVM parameter -Dio.netty.leakDetection.acquireAndReleaseOnly, which defaults to false, meaning that by default every method access to the ByteBuf records a stack.

private static final String PROP_ACQUIRE_AND_RELEASE_ONLY = "io.netty.leakDetection.acquireAndReleaseOnly";

ACQUIRE_AND_RELEASE_ONLY = SystemPropertyUtil.getBoolean(PROP_ACQUIRE_AND_RELEASE_ONLY, false);

Setting it to true means the stack is recorded only for the methods that explicitly require it, namely the touch() methods, the retain() methods, and the release() methods. No other method records the stack.

    @Override
    public ByteBuf touch() {
        leak.record();
        return this;
    }

    @Override
    public ByteBuf touch(Object hint) {
        leak.record(hint);
        return this;
    }

    @Override
    public ByteBuf retain() {
        leak.record();
        return super.retain();
    }

    @Override
    public boolean release() {
        leak.record();
        return super.release();
    }
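
At the ADVANCED and PARANOID levels, touch(hint) is the handle we can use to label the buffer's journey ourselves, since each call adds a TraceRecord whose hint later shows up in the leak report. A hedged usage sketch (the pipeline stage names in the hints are invented for illustration):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class TouchDemo {
    public static void main(String[] args) {
        ByteBuf msg = PooledByteBufAllocator.DEFAULT.directBuffer(64);

        // Each touch(hint) records the current stack plus the hint in the tracker
        msg.touch("decoded in MyFrameDecoder");
        // ... pass the buffer to the next stage ...
        msg.touch("queued for business handler");

        msg.release();
    }
}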

Since the SIMPLE detection level only records the creation stack and not the access stacks, none of SimpleLeakAwareByteBuf's access methods call leak.record().

class SimpleLeakAwareByteBuf extends WrappedByteBuf {
    @Override
    public ByteBuf touch() {
        return this;
    }

    @Override
    public ByteBuf touch(Object hint) {
        return this;
    }
}

Summary

The following five conditions need to be met in order to trigger Netty's memory leak detection mechanism (a small sketch putting them all together follows the list):

  1. The application must have memory leak detection turned on.

  2. The memory leak can only be detected after the ByteBuf has been GC'd. If the GC has not been triggered, then even if the ByteBuf does not have any strong references or soft references, there is no way to detect the memory leak.

  3. Even after GC occurs, leak detection is only triggered on the next memory allocation. If no further memory allocation happens, no leak detection happens.

  4. Netty does not track every ByteBuf; it samples them at a certain interval. So for a leak to be detected, the leaked ByteBuf must have been hit by the sampling.

  5. The application's logging must have the ERROR level enabled, because Netty outputs memory leak reports at the ERROR log level; if ERROR logging is not enabled, no leak report is printed.
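
Putting the five conditions together, here is a hedged sketch that deliberately "leaks" a buffer so a report can be observed (assumptions of my own, not taken from the article: it uses the default pooled allocator, sets the level programmatically, and relies on System.gc() plus a short sleep, which only makes a leak report likely rather than guaranteed):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ResourceLeakDetector;

public class LeakDemo {
    public static void main(String[] args) throws InterruptedException {
        // Condition 1: leak detection is enabled (PARANOID also satisfies condition 4 by tracking every buffer)
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);

        allocateAndForget();   // a tracked ByteBuf that is never released

        System.gc();           // condition 2: the leaked wrapper must be GC'd
        Thread.sleep(1000);    // give the ReferenceHandler thread time to enqueue the tracker

        // Condition 3: detection only runs on the next allocation
        ByteBuf trigger = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        trigger.release();
        // Condition 5: the "LEAK: ..." report is logged at ERROR level, so a logging
        // binding with ERROR enabled must be present to actually see it
    }

    private static void allocateAndForget() {
        ByteBuf leaked = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        leaked.writeByte(1);
        // intentionally no release(): its DefaultResourceLeak stays in allLeaks
    }
}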

We can set different detection levels for the application with the JVM parameter -Dio.netty.leakDetection.level:

  1. DISABLED means memory leak detection is turned off entirely.

  2. SIMPLE performs sampled leak detection; the sampling frequency can be adjusted with the JVM parameter -Dio.netty.leakDetection.samplingInterval. The memory leak report only contains the stack of where the ByteBuf was created.

  3. ADVANCED also performs sampled detection, but the memory leak report contains more detail, for example the stack traces of the ByteBuf's access path, and the number of leak stacks that can be captured is limited by the -Dio.netty.leakDetection.targetRecords parameter.

  4. PARANOID builds on ADVANCED but tracks every ByteBuf in the system instead of sampling. It is the highest level, with the most complete information and the highest overhead.

Well, that's it for today, we'll see you in the next post ~~~~~