JVM heap usage issue due to percolate queries


(Anatoly Petkevich) #1

Hi,
Document percolation in our environment periodically causes high JVM heap usage that grows steadily until it hits the maximum memory limit.
We cannot figure out what causes this, but it looks like a memory leak related to dangling in-memory temporary Lucene indices used by the percolator.
Many input documents have nested fields.

ES version is 5.2.2.
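
For context, our setup follows the usual 5.x percolator pattern. The sketch below uses made-up index, type, and field names rather than our real mappings; the nested field is there because our documents have them:

PUT queries
{
  "mappings": {
    "doctype": {
      "properties": {
        "query": { "type": "percolator" },
        "message": { "type": "text" },
        "attachments": {
          "type": "nested",
          "properties": { "name": { "type": "keyword" } }
        }
      }
    }
  }
}

GET queries/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "document_type": "doctype",
      "document": {
        "message": "sample input",
        "attachments": [ { "name": "a.txt" } ]
      }
    }
  }
}

Each percolate request builds a temporary in-memory Lucene index for the candidate document (including its nested children), and those temporary indices are what appear to be retained.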
Thank you.


(Thiago Souza) #2

Hello Anatoly,

In 5.x, the class org.elasticsearch.index.cache.bitset.BitsetFilterCache is actually the node query cache, and ~75% of the heap is far too high for it (the default cap is 10%).

Check whether your elasticsearch.yml sets indices.queries.cache.size.
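
If it is set, it is a single line like this (the 5% value is only an illustration):

indices.queries.cache.size: 5%

You can also check the live footprint with a node stats request; query_cache.memory_size_in_bytes is the query cache itself, while segments.fixed_bit_set_memory_in_bytes is what BitsetFilterCache holds for nested documents:

GET _nodes/stats/indices/query_cache,segments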

Regards


#3

Thiago,

We don't set indices.queries.cache.size; it is left at its default.
The issue affects only nodes that host percolate indices (see the heap dump from one of those nodes in Anatoly's post above); the heap dump from one of the healthy nodes in the same cluster, which have no percolate indices, looks normal.
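
A quick way to compare heap pressure across the nodes (heap.percent and heap.max are standard _cat/nodes columns):

GET _cat/nodes?v&h=name,heap.percent,heap.max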

Thank you


(Thiago Souza) #4

Hello,

Please attach your complete elasticsearch.yml.

Cheers


#5

Thiago,

Below is the elasticsearch.yml from our test cluster, where the issue is reproduced (settings unrelated to the issue, such as discovery, xpack etc., were omitted):

bootstrap.memory_lock: true                 # lock the JVM heap in RAM to prevent swapping
indices.memory.index_buffer_size: 30%       # shared indexing buffer (the default is 10%)
indices.memory.min_index_buffer_size: 96mb  # floor for the indexing buffer
thread_pool.bulk.size: '3'                  # fixed number of bulk threads
thread_pool.bulk.queue_size: "-1"           # unbounded bulk queue

Thanks


(Anatoly Petkevich) #6

Hello,
Could this issue be related to [Percolator] Remove caching and support "now" in range queries?
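
(For context, that change concerns stored percolator queries that use "now" in range clauses, i.e. queries registered like this hypothetical one:

PUT queries/doctype/1
{
  "query": {
    "range": { "timestamp": { "gte": "now-1h" } }
  }
}

The PR title suggests the percolator's result caching was removed in part because cached results go stale against a moving "now".)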


(Anatoly Petkevich) #7

A bug has been filed: https://github.com/elastic/elasticsearch/issues/24108


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.