Elasticsearch 2.4.1 Cluster stuck on Heap high usage

I have an Elasticsearch v2.4.1 cluster with 4 nodes (1 master, 3 data), 8 GB heap and 16 GB RAM per node, 4 CPU cores per node, and 6 indices.

I use 3 of the indices as my primary indices. When I have to index new documents, I take a snapshot and restore it into a backup index, which serves read operations while I index the new documents into the primary.
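For reference, that snapshot-and-restore cycle looks roughly like this with the ES 2.x REST API (the repository name `my_backup` and the index names are placeholders, not the actual ones from my cluster):

```shell
# Register a filesystem snapshot repository
# (the location must be listed under path.repo in elasticsearch.yml)
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Snapshot the primary index and wait until it finishes
curl -XPUT 'localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true' -d '{
  "indices": "my_primary_index"
}'

# Restore it under a different name to serve read traffic
curl -XPOST 'localhost:9200/_snapshot/my_backup/snap_1/_restore' -d '{
  "indices": "my_primary_index",
  "rename_pattern": "my_primary_index",
  "rename_replacement": "my_backup_index"
}'
```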

The heap lasts at most 16 hours before the gap between the upper and lower bounds of the GC sawtooth shrinks to almost nothing and GC can no longer free enough memory.
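A quick way to watch this from the command line (the same numbers Marvel charts) is the cat nodes API:

```shell
# Show per-node heap usage; a heap.percent that stays pinned high with
# no sawtooth dips after GC means the old generation is full of live objects
curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'
```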

I tried clearing the caches, but it does not free the heap. I have mlockall = true on all the nodes.
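These are the two checks I mean, for anyone reproducing this:

```shell
# Clear the query, fielddata, and request caches on all indices
curl -XPOST 'localhost:9200/_cache/clear'

# Verify mlockall is actually in effect on every node
# (look for "mlockall": true in each node's process section)
curl 'localhost:9200/_nodes/process?pretty'
```

If clearing the caches frees nothing, the heap is being held by something other than the cache (e.g. live objects the old-generation GC cannot collect).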

In the elasticsearch.yml I have the following configuration:

script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
index.cache.field.type: soft
indices.memory.index_buffer_size: 30%
index.store.type: niofs
index.translog.flush_threshold_ops: 50000
discovery.zen.ping.multicast.enabled: false
index.requests.cache.enable: true
index.cache.query.enable: true
indices.cache.filter.size: 2%
index.cache.filter.enable: true

I took this screenshot from Marvel; the heap only drops again when I restart all the nodes:

The cluster health:

The data size and memory seem to be OK:

Why is the heap stuck like that?

If you are using Groovy scripts, look at these topics: https://discuss.elastic.co/t/elastic-search-using-a-lot-of-memory-gc-thrashing/48695 and https://discuss.elastic.co/t/heap-issue-after-upgrading-elasticsearch-from-v1-7-5-to-v2-4-1/65612
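The short version of those threads: in ES 2.x every distinct inline Groovy script source is compiled into its own class and cached, so interpolating values into the script string grows the heap with each unique value. Passing values through `params` keeps the source constant so it is compiled once. A minimal sketch (the index name, field, and multiplier are made-up for illustration):

```shell
# BAD: a new script source per value -> a new compiled class per value
curl -XPOST 'localhost:9200/my_index/_search' -d '{
  "query": { "match_all": {} },
  "script_fields": {
    "adjusted": {
      "script": { "inline": "doc[\"price\"].value * 1.19", "lang": "groovy" }
    }
  }
}'

# BETTER: one constant source, values passed as params -> compiled once
curl -XPOST 'localhost:9200/my_index/_search' -d '{
  "query": { "match_all": {} },
  "script_fields": {
    "adjusted": {
      "script": {
        "inline": "doc[\"price\"].value * factor",
        "lang": "groovy",
        "params": { "factor": 1.19 }
      }
    }
  }
}'
```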

Thanks rusty!

I think that is the cause of the problem.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.