If I have a data node with 128GB of RAM, is it better to allocate 64GB to ES and 64GB to the system?
Or would it be OK (or even better) to allocate, say, 100GB to ES and leave 28GB to the system?
Our system is write heavy, hence the idea of the second approach instead of a 50-50 split.
I know that going above ~32GB of heap disables compressed object pointers, so I'd need at least 48GB of heap just to get back to the same effective usable memory. Since I have so much RAM, would it be better to allocate, say, more than 96GB to ES?
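For context, here is a quick sketch I'd use to confirm whether a given heap size actually keeps compressed oops enabled, by reading the nodes info API. It assumes a node reachable at http://localhost:9200 with no authentication; adjust the URL and credentials for your own setup.

```python
# Minimal sketch (not an official tool): report each node's max heap and
# whether the JVM is still using compressed ordinary object pointers.
# Assumes a cluster reachable at http://localhost:9200 with no auth.
import json
from urllib.request import urlopen

NODES_JVM_URL = "http://localhost:9200/_nodes/jvm"  # assumed host/endpoint

with urlopen(NODES_JVM_URL) as resp:
    info = json.load(resp)

for node_id, node in info["nodes"].items():
    jvm = node["jvm"]
    heap_max_gb = jvm["mem"]["heap_max_in_bytes"] / 1024 ** 3
    # "true" means the configured heap is still small enough for compressed oops.
    compressed = jvm.get("using_compressed_ordinary_object_pointers", "unknown")
    print(f"{node.get('name', node_id)}: heap_max={heap_max_gb:.1f}GB, "
          f"compressed_oops={compressed}")
```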
Thanks.
Sure, I will always test it. The reason for this question is to see whether there's any insight from people who know something about this or have dealt with a similar situation.
From your answer I gather that the amount of system RAM required simply depends on how much we query "older" (not recently written) documents.
If my write-to-read ratio is really lopsided, like 99% to 1%, then too much system RAM is wasteful.