I have an Elasticsearch 2.4.1 cluster with 4 nodes (1 master, 3 data), 8 GB heap and 16 GB RAM per node, 4 CPU cores per node, and 6 indices.
I use 3 of the indices as my primary indices. When I need to index new documents, I snapshot a primary index and restore it to a backup index, serve read operations from the backup, and index the new docs into the primary.
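For reference, the snapshot-and-restore step looks roughly like this against the 2.x REST API (the repository name `my_backup`, the filesystem path, and the index names are placeholders, not my real ones):

```shell
# Register a filesystem snapshot repository (one-time setup;
# the location must be listed under path.repo in elasticsearch.yml)
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}'

# Snapshot the primary index
curl -XPUT 'localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true' -d '{
  "indices": "primary_index"
}'

# Restore it under a different name so reads can hit the copy
curl -XPOST 'localhost:9200/_snapshot/my_backup/snap_1/_restore' -d '{
  "indices": "primary_index",
  "rename_pattern": "primary_index",
  "rename_replacement": "backup_index"
}'
```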
The heap lasts at most 16 hours before the gap between its peaks and troughs becomes minimal and the GC can no longer free enough memory.
I have tried clearing the caches, but that does not free the heap. I have mlockall enabled (bootstrap.mlockall: true) on all nodes.
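This is how I clear the caches (via the 2.x clear cache API, on all indices):

```shell
# Clear all index-level caches cluster-wide
curl -XPOST 'localhost:9200/_cache/clear'

# Or clear only fielddata, which is usually the biggest heap consumer
curl -XPOST 'localhost:9200/_cache/clear?fielddata=true'
```

Neither variant brings the heap back down.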
In elasticsearch.yml I have the following settings:
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
index.cache.field.type: soft
indices.memory.index_buffer_size: 30%
index.store.type: niofs
index.translog.flush_threshold_ops: 50000
discovery.zen.ping.multicast.enabled: false
index.requests.cache.enable: true
index.cache.query.enable: true
indices.cache.filter.size: 2%
index.cache.filter.enable: true
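To see where the heap is going, I check per-node heap usage and the memory breakdown with the cat and node-stats APIs (hostnames/ports are placeholders):

```shell
# Per-node heap usage as a percentage of the configured maximum
curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'

# Break down heap consumers: fielddata, query cache, and segment memory
curl 'localhost:9200/_nodes/stats/indices/fielddata,query_cache,segments?pretty'
```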
This screenshot is from Marvel; the heap only drops back down when I restart all the nodes:
The cluster health:
The data size and memory seem to be OK:
Why is the heap stuck like that?