
AdaptivePoolingAllocator Class — netty Architecture

Architecture documentation for the AdaptivePoolingAllocator class in AdaptivePoolingAllocator.java from the netty codebase.

Entity Profile

Dependency Diagram

graph TD
  adaptivePoolingAllocator["AdaptivePoolingAllocator"]
  adaptivePoolingAllocatorJava["AdaptivePoolingAllocator.java"]
  adaptivePoolingAllocator -->|defined in| adaptivePoolingAllocatorJava
  constructor["AdaptivePoolingAllocator()"]
  adaptivePoolingAllocator -->|method| constructor
  createMagazineGroupSizeClasses["createMagazineGroupSizeClasses()"]
  adaptivePoolingAllocator -->|method| createMagazineGroupSizeClasses
  createSharedChunkQueue["createSharedChunkQueue()"]
  adaptivePoolingAllocator -->|method| createSharedChunkQueue
  byteBuf["ByteBuf()"]
  adaptivePoolingAllocator -->|method| byteBuf
  adaptiveByteBuf["AdaptiveByteBuf()"]
  adaptivePoolingAllocator -->|method| adaptiveByteBuf
  sizeIndexOf["sizeIndexOf()"]
  adaptivePoolingAllocator -->|method| sizeIndexOf
  sizeClassIndexOf["sizeClassIndexOf()"]
  adaptivePoolingAllocator -->|method| sizeClassIndexOf
  getSizeClasses["getSizeClasses()"]
  adaptivePoolingAllocator -->|method| getSizeClasses
  magazine["Magazine()"]
  adaptivePoolingAllocator -->|method| magazine
  reallocate["reallocate()"]
  adaptivePoolingAllocator -->|method| reallocate
  usedMemory["usedMemory()"]
  adaptivePoolingAllocator -->|method| usedMemory
  finalizeMethod["finalize()"]
  adaptivePoolingAllocator -->|method| finalizeMethod
  free["free()"]
  adaptivePoolingAllocator -->|method| free


Source Code

buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java, lines 83–2070 (excerpt shown below)

@UnstableApi
final class AdaptivePoolingAllocator {
    private static final int LOW_MEM_THRESHOLD = 512 * 1024 * 1024;
    private static final boolean IS_LOW_MEM = Runtime.getRuntime().maxMemory() <= LOW_MEM_THRESHOLD;

    /**
     * Whether the IS_LOW_MEM setting should disable thread-local magazines.
     * This can have fairly high performance overhead.
     */
    private static final boolean DISABLE_THREAD_LOCAL_MAGAZINES_ON_LOW_MEM = SystemPropertyUtil.getBoolean(
            "io.netty.allocator.disableThreadLocalMagazinesOnLowMemory", true);

    /**
     * The 128 KiB minimum chunk size is chosen to encourage the system allocator to delegate to mmap for chunk
     * allocations. For instance, glibc will do this.
     * This pushes any fragmentation from chunk size deviations off physical memory, onto virtual memory,
     * which is a much, much larger space. Chunks are also allocated in whole multiples of the minimum
     * chunk size, which itself is a whole multiple of popular page sizes like 4 KiB, 16 KiB, and 64 KiB.
     */
    static final int MIN_CHUNK_SIZE = 128 * 1024;
    private static final int EXPANSION_ATTEMPTS = 3;
    private static final int INITIAL_MAGAZINES = 1;
    private static final int RETIRE_CAPACITY = 256;
    private static final int MAX_STRIPES = IS_LOW_MEM ? 1 : NettyRuntime.availableProcessors() * 2;
    private static final int BUFS_PER_CHUNK = 8; // For large buffers, aim to have about this many buffers per chunk.

    /**
     * The maximum size of a pooled chunk, in bytes. Allocations bigger than this will never be pooled.
     * <p>
     * This number is 8 MiB, and is derived from the limitations of internal histograms.
     */
    private static final int MAX_CHUNK_SIZE = IS_LOW_MEM ?
            2 * 1024 * 1024 : // 2 MiB for systems with small heaps.
            8 * 1024 * 1024; // 8 MiB.
    private static final int MAX_POOLED_BUF_SIZE = MAX_CHUNK_SIZE / BUFS_PER_CHUNK;

    /**
     * The capacity of the chunk reuse queues, which allow chunks to be shared across magazines in a group.
     * The default size is twice {@link NettyRuntime#availableProcessors()},
     * same as the maximum number of magazines per magazine group.
     */
    private static final int CHUNK_REUSE_QUEUE = Math.max(2, SystemPropertyUtil.getInt(
            "io.netty.allocator.chunkReuseQueueCapacity", NettyRuntime.availableProcessors() * 2));

    /**
     * The capacity of the magazine-local buffer queue. This queue just pools the outer ByteBuf instance and not
     * the actual memory and so helps to reduce GC pressure.
     */
    private static final int MAGAZINE_BUFFER_QUEUE_CAPACITY = SystemPropertyUtil.getInt(
            "io.netty.allocator.magazineBufferQueueCapacity", 1024);

    /**
     * The size classes are chosen based on the following observation:
     * <p>
     * Most allocations, particularly ones above 256 bytes, aim to be a power-of-2. However, many use cases, such
     * as framing protocols, are themselves operating or moving power-of-2 sized payloads, to which they add a
     * small amount of overhead, such as headers or checksums.
     * This means we seem to get a lot of mileage out of having both power-of-2 sizes, and power-of-2-plus-a-bit.
     * <p>
     * On the conflicting requirements of both having as few chunks as possible, and having as little wasted
     * memory within each chunk as possible, this seems to strike a surprisingly good balance for the use cases
     * tested so far.
     */
    private static final int[] SIZE_CLASSES = {
            32,
            64,
            128,
            256,
            512,
            640, // 512 + 128
            1024,
            1152, // 1024 + 128
            2048,
            2304, // 2048 + 256
            4096,
            4352, // 4096 + 256
            8192,
            8704, // 8192 + 512
            16384,
            16896, // 16384 + 512
    };
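
Worked Example: Size Classes and Pooling Threshold

To make the reasoning behind these constants concrete, the standalone sketch below (not netty's actual implementation) maps a requested size onto the SIZE_CLASSES table and shows how MAX_POOLED_BUF_SIZE (MAX_CHUNK_SIZE / BUFS_PER_CHUNK, i.e. 8 MiB / 8 = 1 MiB on regular heaps) acts as the pooling cut-off. The linear lookup is a simplified stand-in for what sizeClassIndexOf() might do; the real method may use a different search strategy.

// Standalone illustration; helper names and logic are simplified, not copied from netty.
final class SizeClassSketch {
    static final int[] SIZE_CLASSES = {
            32, 64, 128, 256, 512, 640, 1024, 1152,
            2048, 2304, 4096, 4352, 8192, 8704, 16384, 16896,
    };
    static final int MAX_CHUNK_SIZE = 8 * 1024 * 1024;                      // 8 MiB on regular heaps
    static final int BUFS_PER_CHUNK = 8;
    static final int MAX_POOLED_BUF_SIZE = MAX_CHUNK_SIZE / BUFS_PER_CHUNK; // 1 MiB

    // Returns the index of the smallest size class that fits the request, or -1 if none does.
    static int sizeClassIndexOf(int size) {
        for (int i = 0; i < SIZE_CLASSES.length; i++) {
            if (size <= SIZE_CLASSES[i]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // A 1040-byte frame (1024-byte payload plus a 16-byte header) fits the 1152 class,
        // wasting far less memory than rounding up to the next power of two (2048).
        System.out.println(sizeClassIndexOf(1040));                 // 7, i.e. the 1152-byte class
        System.out.println(200 * 1024 <= MAX_POOLED_BUF_SIZE);      // true: 200 KiB can still be pooled
        System.out.println(2 * 1024 * 1024 <= MAX_POOLED_BUF_SIZE); // false: 2 MiB is never pooled
    }
}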

Frequently Asked Questions

What is the AdaptivePoolingAllocator class?
AdaptivePoolingAllocator is a final, package-private class in the netty codebase, defined in buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java. It implements an adaptive buffer-pooling strategy built around fixed size classes, pooled chunks, and magazines.
Where is AdaptivePoolingAllocator defined?
AdaptivePoolingAllocator is defined in buffer/src/main/java/io/netty/buffer/AdaptivePoolingAllocator.java, starting at line 83.
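
Usage Example

AdaptivePoolingAllocator itself is package-private, so application code normally reaches it through netty's public allocator API rather than instantiating it directly. The sketch below assumes a recent netty-buffer release in which the public AdaptiveByteBufAllocator wrapper delegates to this class; treat that wiring as an assumption if your netty version differs.

import io.netty.buffer.AdaptiveByteBufAllocator;
import io.netty.buffer.ByteBuf;

public final class AdaptiveAllocatorExample {
    public static void main(String[] args) {
        // Assumption: AdaptiveByteBufAllocator is the public wrapper backed by
        // AdaptivePoolingAllocator in recent netty-buffer releases.
        AdaptiveByteBufAllocator allocator = new AdaptiveByteBufAllocator();

        // Internally, a 1040-byte request should land in the 1152-byte size class.
        ByteBuf buf = allocator.directBuffer(1040);
        try {
            buf.writeLong(42L);
        } finally {
            buf.release(); // returns the backing memory to the allocator's pool
        }
    }
}

The system properties quoted in the source above (io.netty.allocator.chunkReuseQueueCapacity, io.netty.allocator.magazineBufferQueueCapacity, and io.netty.allocator.disableThreadLocalMagazinesOnLowMemory) can be supplied as -D JVM flags to tune queue capacities and low-memory behaviour; recent netty versions can also select the adaptive allocator globally via -Dio.netty.allocator.type=adaptive.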
