Ideal heap size for large RAM machine

As suggested here:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html

I should give no more than half of my memory to the Elasticsearch heap, leaving the rest for Lucene (the OS filesystem cache). It also says that I should not assign more than 32 GB to the Elasticsearch JVM heap.
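If I read that correctly, the heap would be pinned like this in jvm.options (a sketch; I'm assuming a default install, so the file location may differ):

```
# config/jvm.options — pin initial and maximum heap to the same value,
# staying below the ~32 GB compressed-oops threshold
-Xms31g
-Xmx31g
```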

My question is: I have a machine with much more memory than that (200-300 GB). Is there any way I could use the spare memory to improve Elasticsearch performance?

If heap pressure and GC are having an impact on your performance, deploying more than one node, each with a heap just under 32 GB, on such a host can help. If your bottleneck is something else, e.g. disk I/O, that will not necessarily help much.
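If you do run several nodes on one host, two settings are worth knowing about so that copies of the same shard don't all land on that machine (a sketch based on the 5.x-era settings matching the guide you linked; check the docs for your version):

```
# elasticsearch.yml on each node sharing the host
cluster.routing.allocation.same_shard.host: true  # never place a primary and its replica on the same host
node.max_local_storage_nodes: 2                   # let two nodes share the data path (removed in 8.0)
```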

Have you determined what is limiting performance in your setup?
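GC overhead and disk throughput both show up in the node stats API, e.g. (assuming the node listens on the default localhost:9200):

```
curl -XGET 'localhost:9200/_nodes/stats/jvm,fs?human&pretty'
```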

I am not sure exactly what is causing the issue, but the Elasticsearch node takes a long time to start up (after startup it works normally). So I was wondering whether increasing the initial/maximum JVM heap size would have an impact on that.
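For what it's worth, the heap limits the node actually started with can be checked via the nodes info API (again assuming the default localhost:9200):

```
curl -XGET 'localhost:9200/_nodes/jvm?pretty'
```

The response includes the initial and maximum heap per node, which should match the -Xms/-Xmx values in jvm.options.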

Can you provide the full output of the cluster stats API?
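That is, something like:

```
curl -XGET 'localhost:9200/_cluster/stats?human&pretty'
```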

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.