Node memory sizing advice

Normally we allocate 50% of memory to ES and 50% to the OS. What about cases where the data on disk takes up less than 50% of memory: does it make sense to leave that much memory for the OS, or would there be more benefit in giving ES a larger cache? We are targeting really low-latency responses, and we are using SSDs, so disk access is already fast.

Here are the scenarios I am thinking about. Each node has about 15 GB of data.

Current: 60 GB of RAM, 30 for ES, 30 for the OS
Option A: 60 GB of RAM, 40 for ES, 20 for the OS
Option B: 60 GB of RAM, 50 for ES, 10 for the OS
Option C: 30 GB of RAM, 20 for ES, 10 for the OS, but with more machines and less data per machine

Any thoughts?

Thanks in advance!

Don't forget that one of the main constraints you are facing is how much memory the JVM can use efficiently.

Above a 32 GB heap, the JVM can no longer use compressed object pointers (compressed oops), so you are effectively wasting memory until you go well past 32 GB.

See the following link for a discussion on this.

https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html

I've found that a 30 GB heap works well for our 64 GB nodes. The rest is left for the OS.
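
If you want to double-check that your heaps are staying under that threshold, a quick check along these lines might help. It is only a sketch: the localhost:9200 address is a placeholder, and the using_compressed_ordinary_object_pointers field only appears in newer Elasticsearch releases, so older nodes will just report "unknown".

```python
# Sketch only: report each node's max heap and whether compressed oops are on.
# Assumes Elasticsearch is reachable at localhost:9200 and that the version is
# new enough to expose "using_compressed_ordinary_object_pointers" in the
# nodes info API; older versions simply won't have the field.
import json
from urllib.request import urlopen

with urlopen("http://localhost:9200/_nodes/jvm") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, info in nodes.items():
    jvm = info["jvm"]
    heap_gb = jvm["mem"]["heap_max_in_bytes"] / 1024 ** 3
    compressed = jvm.get("using_compressed_ordinary_object_pointers", "unknown")
    print(f"{info['name']}: heap_max={heap_gb:.1f} GB, compressed_oops={compressed}")
```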

Good call, I forgot about that. So I guess the real question is whether I want more machines with less memory each; I feel like I am wasting memory at 60 GB if I can't allocate more of it to ES.

Yeah, you probably are wasting the RAM if there isn't enough data to fill
the disk cache. You could check with the free command.

Don't think of heap size as cache. It's more accurate to think of it as
working memory and everything else as cache.
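
If you'd rather see that split as numbers than squint at free's output, a rough sketch like the one below (Linux only, reading /proc/meminfo; the labels are just shorthand) shows how much RAM the page cache is actually holding versus how much is sitting completely idle:

```python
# Rough sketch (Linux only): same information as `free`, read from /proc/meminfo.
# "Cached" is the page cache (the filesystem cache warming your index files);
# "MemFree" is memory nothing is using at all.
def meminfo_gb():
    vals = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            vals[key] = int(rest.split()[0]) / 1024 ** 2  # values are in kB
    return vals

m = meminfo_gb()
print(f"total:  {m['MemTotal']:.1f} GB")
print(f"cached: {m['Cached']:.1f} GB (page cache)")
print(f"free:   {m['MemFree']:.1f} GB (unused)")
```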

True, but the filter cache does sit inside the heap. And yup, as expected, about 15 GB of RAM is sitting unused. Thanks!
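
In case it helps anyone else, this is roughly how the filter cache's share of the heap can be pulled out of the node stats. It is only a sketch: it assumes a 1.x-style stats layout (indices.filter_cache, which later versions rename to query_cache), and localhost:9200 is just a placeholder.

```python
# Sketch only: show how much heap each node's filter cache is holding.
# Assumes an ES 1.x-style stats layout (indices.filter_cache); later versions
# rename it to query_cache. Host and port are placeholders.
import json
from urllib.request import urlopen

with urlopen("http://localhost:9200/_nodes/stats/indices") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, stats in nodes.items():
    cache = stats["indices"].get("filter_cache") or stats["indices"].get("query_cache") or {}
    size_mb = cache.get("memory_size_in_bytes", 0) / 1024 ** 2
    print(f"{stats['name']}: filter/query cache = {size_mb:.1f} MB of heap")
```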