I have a question about JVM heap usage with Elasticsearch and Kibana. For example, I'm running Elasticsearch 1.7.5 with Kibana 3. Under normal conditions Elasticsearch sits at about 30% heap usage, but when I use Kibana to pull up information, JVM heap usage increases to 60% or 70%.
When I stop using Kibana, the heap usage stays at that level. How long does Elasticsearch need to free that heap? Is there any way to speed it up?
Another question: would it be advisable to upgrade to Elasticsearch 2.x or 5.x to improve this?
Is it a problem that JVM usage stays at 60% or 70%?
Unless you are seeing errors or warnings from the garbage collector, I wouldn't really worry about this. The JVM reclaims heap lazily: memory is only freed when a garbage collection actually runs, so a high-water mark that lingers after heavy querying is normal.
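If you want to check whether the collector is actually under pressure, a quick sketch (assuming a cluster reachable at `localhost:9200`, which works on 1.7 as well as later versions):

```shell
# Per-node heap usage at a glance
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'

# Detailed JVM stats, including old-gen GC collection counts and times;
# frequent, long old-gen collections are the real warning sign, not a
# high heap.percent on its own
curl -s 'localhost:9200/_nodes/stats/jvm?pretty'
```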
First of all, thanks for your answer. Regarding my question: the problem is that if I go back to using Kibana 3 (for example, with other dashboards or the same one), JVM usage climbs to 90% or 100%, and then errors like the following appear (example from the Internet):
You are definitely running low on memory. So, answering this question now:
it would be advisable to use Elasticsearch 2.x or 5.x to improve it?
Yes, definitely. Before 2.x I'd guess you were using fielddata a lot. It has been "replaced" by doc_values.
There have also been a lot of improvements since Kibana 3 when it comes to querying Elasticsearch.
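To illustrate the difference: fielddata is built on the heap at query time, while doc_values live on disk. In 2.x doc_values are the default for `not_analyzed` string, numeric, and date fields; on 1.x you have to enable them per field in the mapping. A minimal sketch (the index name `logs-example`, type `event`, and field `status` are made up for illustration):

```shell
# ES 1.x: opt a field into doc_values so aggregations on it
# don't load fielddata onto the heap
curl -XPUT 'localhost:9200/logs-example' -d '{
  "mappings": {
    "event": {
      "properties": {
        "status": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}'
```

Note that changing this on an existing field requires reindexing; the mapping only applies to newly indexed documents.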
Hello! I'm using Elasticsearch 2.4 and I'm having the same issues. Even if I close all the indexes, JVM heap usage doesn't decrease. Is there anywhere I can find docs about the internal processes of Elasticsearch? Is there any way to reduce its memory usage?
I've tried clearing the caches and force-merging to reduce the segments/indexes, but nothing worked for me.
Thank you in advance.
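For reference, the calls I tried were roughly these (the index name `my-index` is just a placeholder; `_forcemerge` is the 2.x name for what 1.x called `_optimize`):

```shell
# Drop fielddata, query, and request caches across all indices
curl -XPOST 'localhost:9200/_cache/clear'

# Merge each shard of one index down to a single segment
curl -XPOST 'localhost:9200/my-index/_forcemerge?max_num_segments=1'
```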