How much heap would be good for a machine with 120 GB of RAM?

I am running Elasticsearch 5.4 on a 120 GB machine in a production environment.

What should be a good heap size for the machine?

Should we keep the heap size below 32 GB, as recommended in the Elasticsearch documentation, or should we increase it to 50% of the machine's physical memory (60 GB)?

Thanks,
Nitish

The 31.5 GB limit is not the whole story.

This blog article is also worth going through: A Heap of Trouble (https://www.elastic.co/blog/a-heap-of-trouble).

Odds are that you'll end up with ~31 GB, but it's worth measuring the exact cut-off point.
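
For reference, here is one way to measure that cut-off (a sketch; the exact threshold depends on your JVM version and platform, and the `31g` and `localhost:9200` values are placeholders to adjust for your setup):

```
# Ask the JVM whether compressed oops are still in use at a candidate heap
# size; raise -Xmx until UseCompressedOops flips to false, and keep the
# largest size for which it is still true.
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

# Or check a running Elasticsearch node directly via the nodes API:
curl -s 'localhost:9200/_nodes/jvm?pretty' | grep using_compressed
```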

I have gone through the article. There are two cut-offs mentioned for the heap size:

  1. 50% of the physical memory
  2. ~30 GB

We are using G1GC for garbage collection. My question is: which cut-off should we follow for a 120 GB machine?

Clearly the second one, and here's why.

The article says:

> In fact, it takes until around 40–50 GB of allocated heap before you have the same effective memory of a heap just under 32 GB using compressed oops.

But since we have machines with 120 GB of RAM, we can afford to give around 60-70 GB of RAM to the heap.
Also, if we just allocate 30 GB to the heap, would Elasticsearch make good use of the remaining 90 GB of RAM?

I just want to make sure we don't under- or over-allocate heap memory for our production machines.

The point of that blog post is being missed entirely. How much heap you should have is independent of the amount of physical memory on the system; physical memory only serves as an upper constraint. The main input into how much heap you need is your workload, and the measuring and tuning for that workload is something only you can do.

And yes, whatever is left over for the filesystem cache will be utilized by Elasticsearch.
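
As a concrete illustration (a minimal sketch, assuming your measured cut-off lands at the typical ~31 GB), you would pin the heap just under the cut-off in `config/jvm.options` and leave everything else to the OS page cache, which Lucene relies on heavily:

```
# config/jvm.options (Elasticsearch 5.x)
# Set min and max heap to the same value so the heap is never resized.
# 31g assumes a ~31 GB compressed-oops cut-off on this JVM; on a 120 GB
# box the remaining ~89 GB stays available to the filesystem cache.
-Xms31g
-Xmx31g
```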
