The documentation suggests that I should give no more than half of my memory to the Elasticsearch JVM heap, leaving the rest for Lucene. It also says that I should not assign more than 32GB to the Elasticsearch JVM heap size.
My question is that I have a machine with much more memory (200-300 GB). Is there any way I could use the spare memory to improve Elasticsearch performance?
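For reference, the heap is set via the `-Xms`/`-Xmx` flags. A minimal sketch, assuming a version that reads `config/jvm.options` (5.x and later; older releases use the `ES_HEAP_SIZE` environment variable instead), with 30g as an illustrative value rather than a recommendation:

```
# config/jvm.options -- keep initial and maximum heap equal,
# and stay below the ~32GB threshold where the JVM loses
# compressed object pointers
-Xms30g
-Xmx30g
```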
If heap pressure and GC are having an impact on your performance, deploying more than one node, each with a heap just under 32GB, on such a host can help. If, on the other hand, your performance bottleneck is e.g. disk I/O, that will not necessarily help much.
Have you determined what is limiting performance in your setup?
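As a rough sketch of the multi-node-per-host setup, each node needs its own name, data path, and port. Assuming two nodes on one machine (names, paths, and ports below are illustrative), the first node's `elasticsearch.yml` might look like:

```yaml
# node-1/elasticsearch.yml -- first of two nodes on the same host
cluster.name: my-cluster        # both nodes join the same cluster
node.name: node-1
path.data: /var/data/es/node-1  # each node needs its own data path
http.port: 9200                 # second node would use e.g. 9201
# keep primary and replica copies of a shard off the same physical host
cluster.routing.allocation.same_shard.host: true
```

The `same_shard.host` setting matters here: without it, a primary and its replica can land on two nodes that share the same machine, which defeats the redundancy.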
I am not sure exactly what is causing the issue, but the Elasticsearch node is taking a long time during startup (after startup it works normally). So I was wondering if increasing the initial/maximum JVM heap size would have an impact on that.
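One thing worth checking before touching the heap: slow startup is often dominated by shard recovery and allocation rather than GC. Assuming the default port, something like the following shows whether recoveries are still in flight:

```
# list shard recoveries currently in progress
curl -s 'localhost:9200/_cat/recovery?active_only=true&v'
# overall cluster state while starting up
curl -s 'localhost:9200/_cluster/health?pretty'
```

If the node sits in yellow/red with active recoveries for most of that time, a bigger heap is unlikely to be the fix.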