What is the relation between total index size and heap usage? I've got 9 indices with a total size of 180 MB, yet heap usage is over 3 GB.
Is this normal, and why is it so high? Does Elasticsearch keep search data in memory until garbage collection throws it out? I did notice this morning (after nobody had run any searches for 12+ hours) that heap usage was much lower.
Elasticsearch allocates the entire heap you configured up front, so it looks fully occupied in tools like top even when much of it is not actually in use.
You can use the node stats API to get a better overview of where your heap is going, as there are various caches at play in the JVM heap. And of course there is always the option of taking a heap dump.
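For example, a small Python sketch like the one below prints heap usage next to some of the per-node cache sizes reported by node stats. It assumes an unauthenticated cluster on http://localhost:9200, and the exact field names can differ slightly between Elasticsearch versions, so treat it as a starting point rather than a definitive tool:

```python
import json
import urllib.request

# Assumption: local, unauthenticated cluster; adjust URL/credentials as needed.
BASE_URL = "http://localhost:9200"

def get_json(path):
    """Fetch a JSON document from the cluster's REST API."""
    with urllib.request.urlopen(BASE_URL + path) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Per-node JVM and indices-level statistics.
stats = get_json("/_nodes/stats/jvm,indices")

for node_id, node in stats["nodes"].items():
    jvm = node["jvm"]["mem"]
    indices = node["indices"]
    mb = 1024 ** 2
    print(f"node {node['name']}:")
    print(f"  heap used     : {jvm['heap_used_in_bytes'] / mb:.0f} MB "
          f"({jvm['heap_used_percent']}% of {jvm['heap_max_in_bytes'] / mb:.0f} MB)")
    # Some of the larger heap consumers exposed by node stats:
    print(f"  query cache   : {indices['query_cache']['memory_size_in_bytes'] / mb:.1f} MB")
    print(f"  request cache : {indices['request_cache']['memory_size_in_bytes'] / mb:.1f} MB")
    print(f"  fielddata     : {indices['fielddata']['memory_size_in_bytes'] / mb:.1f} MB")
    print(f"  segments      : {indices['segments']['memory_in_bytes'] / mb:.1f} MB")
```

If the caches only account for a small fraction of the used heap, the rest is usually transient garbage that the JVM has not collected yet, which ties into the point below.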
Also, with a large heap and little activity, the JVM may not run a garbage collection until a certain amount of the heap is in use, so memory from old searches is not necessarily cleaned up right away.