Ideal heap size for large RAM machine


(vivek) #1

As suggested here:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html

I should give no more than half of my memory to the Elasticsearch heap, leaving the rest for Lucene. It also says that I should not assign more than 32GB to the Elasticsearch JVM heap.

My question is: I have machines with much larger memory (200-300 GB). Is there any way I could use the spare memory to improve Elasticsearch performance?
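
To make the question concrete, this is roughly what I mean by the heap settings (a sketch only; the install path and the 31g value are placeholders, not my exact setup):

```
# Sketch: pin min and max heap to the same value, just under the
# ~32GB compressed-oops threshold. Path and value are placeholders.
ES_JAVA_OPTS="-Xms31g -Xmx31g" ./bin/elasticsearch
```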


(Christian Dahlqvist) #2

If heap pressure and GC are having an impact on your performance, deploying more than one node, each with a heap just under 32GB, on such a host can help. If your performance bottleneck is something else, e.g. disk I/O, extra nodes will not necessarily help much.
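
As a rough sketch of what that could look like (node names, ports, and data paths are made up here, and the `-E` flag assumes a 5.x+ install):

```
# Sketch: two nodes sharing one large host, each with a <32GB heap.
ES_JAVA_OPTS="-Xms31g -Xmx31g" ./bin/elasticsearch \
  -E node.name=node-1 -E http.port=9200 -E path.data=/data/node-1
ES_JAVA_OPTS="-Xms31g -Xmx31g" ./bin/elasticsearch \
  -E node.name=node-2 -E http.port=9201 -E path.data=/data/node-2
```

With two data nodes on one physical host it is also worth setting `cluster.routing.allocation.same_shard.host: true`, so that both copies of a shard cannot end up on the same machine.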

Have you determined what is limiting performance in your setup?
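
A quick first pass, assuming a node listening on localhost:9200:

```
# What are the CPUs busy with?
curl 'localhost:9200/_nodes/hot_threads'
# GC activity, OS load, and disk stats per node
curl 'localhost:9200/_nodes/stats/jvm,os,fs?pretty'
```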


(vivek) #3

I am not sure exactly what is causing the issue, but the Elasticsearch node is taking a long time to start up (after startup it works normally). So I was wondering whether increasing the initial/maximum JVM heap size would have any impact on that.
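
In case it is shard recovery rather than heap that takes the time, I assume a check like this would show it (assuming the node listens on localhost:9200):

```
# Per-shard recovery progress; long-running entries would point at
# recovery/translog replay rather than heap size.
curl 'localhost:9200/_cat/recovery?v&active_only=true'
```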


(Christian Dahlqvist) #4

Can you provide the full output of the cluster stats API?
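
For example (assuming the node is reachable on localhost:9200):

```
curl 'localhost:9200/_cluster/stats?human&pretty'
```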


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.