Normally we allocate 50% of memory to the ES heap and 50% to the OS. In cases where the data on disk takes up less than 50% of memory, does it still make sense to leave so much memory for the OS, or would there be more benefit in a larger heap? We are targeting very low-latency responses, and we are using SSDs, so disk access is already fast.
Here are some scenarios I am thinking about. Each node has about 15 GB of data.

current: 60 GB of RAM, 30 for ES, 30 for OS
option a: 60 GB of RAM, 40 for ES, 20 for OS
option b: 60 GB of RAM, 50 for ES, 10 for OS
option c: 30 GB of RAM, 20 for ES, 10 for OS, but with more machines and less data per machine
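For reference, if you do experiment with any of these splits, the heap size is set via `-Xms`/`-Xmx` in `jvm.options` (or through `ES_JAVA_OPTS`). A sketch for option a; note that Elasticsearch's own guidance is to keep the heap at or below 50% of RAM and under roughly 32 GB, since above that threshold the JVM loses compressed object pointers and a nominally larger heap can hold less:

```
# jvm.options sketch for option a (40 GB heap on a 60 GB machine).
# Caveat: a 40 GB heap disables compressed oops, so it may offer
# less effective headroom than a ~31 GB heap would.
-Xms40g
-Xmx40g
```

Setting `-Xms` equal to `-Xmx` avoids heap resizing pauses at runtime, which matters for low-latency targets.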
Good call, I forgot about that. So I guess the real question is whether I want more machines with less memory each. I feel like I am wasting memory at 60 GB if I can't allocate more of it to ES.