I am running Elasticsearch 5.4 on a 120 GB machine in a production environment.
What should be a good heap size for the machine?
Should we keep the heap size below 32 GB, as recommended in the Elasticsearch documentation, or should we increase it to 50% of the machine's physical memory (60 GB)?
The article says:

"In fact, it takes until around 40–50 GB of allocated heap before you have the same effective memory of a heap just under 32 GB using compressed oops."
But since we have machines with 120 GB of RAM, we can afford to give around 60–70 GB to the heap.
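For what it's worth, I assume we can check whether a given heap size would still get compressed oops with the standard HotSpot flag dump below (the exact output formatting varies by JDK version; the two sizes are just the candidates from this question):

```
# Ask the JVM whether compressed oops would be enabled for a given max heap.
# Look for "UseCompressedOops ... true" (or "false") in the output.
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
java -Xmx60g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```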
Also, if we allocate just 30 GB to the heap, would Elasticsearch make good use of the remaining 90 GB of RAM?
We just want to make sure we don't under- or over-allocate heap memory on our production machines.
The point of that blog post is being missed entirely. How much heap you should have is independent of the amount of physical memory on the system; physical memory only serves as an upper constraint. The main input into how much heap you should have is your workload, and determining that requires measuring and tuning that only you can do.
And yes, whatever is left over for the filesystem cache will be utilized by Elasticsearch.
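If it helps, here is a minimal sketch of how the heap is usually pinned on a 5.x node; the 30g value is only a placeholder for whatever your own measuring and tuning lands on, not a recommendation:

```
# config/jvm.options -- set min and max heap to the same value so the heap
# is allocated up front and never resized (30g is a placeholder, not advice)
-Xms30g
-Xmx30g
```

The same settings can also be passed at startup through the ES_JAVA_OPTS environment variable instead of editing the file.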