Memory consumption in io.netty.buffer.PoolThreadCache

Hi guys,
I'm seeing large memory consumption in io.netty.buffer.PoolThreadCache.
This happens in our CI environment, where many tests are executed and lots of index / search / delete operations are performed against ES.
I don't believe this is a memory leak at all.

The problem is that our JUnit tests still need to run reliably within the available heap.

Checking the revision where the problem started to happen, it seems to be somehow related to a piece of code like this:

void addDocuments(...)
{
    BulkProcessor bulkProcessor = ...

    while(...) // huge loop
    {
        bulkProcessor.add(...);
    }

    bulkProcessor.flush();
    bulkProcessor = null;
}
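For context, flush() only submits whatever is buffered and returns; it does not wait for the in-flight bulk requests to complete. If the goal is to block until everything has been processed, a sketch using the BulkProcessor API could look like the following (the builder settings and listener are illustrative, not a recommendation; awaitClose(timeout, unit) flushes and then blocks until pending bulks finish or the timeout elapses):

```java
import java.util.concurrent.TimeUnit;

import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

class BulkExample {
    void addDocuments(Client client) throws InterruptedException {
        BulkProcessor bulkProcessor = BulkProcessor.builder(client,
                new BulkProcessor.Listener() {
                    @Override public void beforeBulk(long id, BulkRequest req) { }
                    @Override public void afterBulk(long id, BulkRequest req, BulkResponse resp) { }
                    @Override public void afterBulk(long id, BulkRequest req, Throwable t) { }
                })
                .setBulkActions(1000) // illustrative batch size
                .build();

        // ... huge loop of bulkProcessor.add(...) ...

        // flush() only hands buffered requests off and returns immediately;
        // awaitClose() flushes and then blocks until in-flight bulks complete.
        bulkProcessor.awaitClose(30, TimeUnit.SECONDS);
    }
}
```

Note that awaitClose() also closes the processor, so this fits a per-method BulkProcessor like the one above, not a shared long-lived instance.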

This method is called many, many times during our tests.
I thought that when flush() was called, all the memory held for the given requests would be freed immediately, but apparently it is not.

In this case, what can be done? Is there an option to force flush() to be synchronous, so the memory is freed immediately?

UPDATE:
ES version: 5.4.3

Thanks in advance!

Hi @tberne,

Netty is used internally by Elasticsearch in its networking layer. Netty has different strategies for how it allocates objects; most notably, you can choose between pooled usage (i.e. objects get reused) and unpooled usage. The default in Elasticsearch is to use Netty's object pooling. That usually makes sense in production because (given sufficient heap) object pooling reduces GC pressure.

In your case, you seem to have limited heap (I see that instances of said class take up ~167 MB). You can try the unpooled allocator by setting -Dio.netty.allocator.type=unpooled in the JVM options. This should reduce memory consumption, at the price of increased GC pressure, which is probably OK for your tests.
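For a CI test run, one way to pass that flag to the test JVM is via the build tool; the sketch below assumes a Maven build with Surefire (the io.netty.allocator.type property is Netty's, the Maven invocation is just one illustrative way to set it):

```shell
# Run the test suite with Netty's unpooled allocator to cut
# PoolThreadCache memory (trades lower footprint for more GC work):
mvn test -DargLine="-Dio.netty.allocator.type=unpooled"

# Or, for a standalone JVM, add the flag directly to its options:
# -Dio.netty.allocator.type=unpooled
```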

Daniel

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.