--> 1. The heap allocation should not exceed 32GB or 50% of total RAM, whichever is smaller, even if more RAM is available, because of the JVM's compressed object pointer functionality. So the safe bet is 31GB for the heap when we have > 64GB RAM.
--> 2. It also says "A machine with 64 GB of RAM is the ideal sweet spot, but 32 GB and 16 GB machines are also common.", and nowhere does it mention that we can have more RAM while allocating < 32GB for the heap.
--> 3. It also says "Less than 8 GB tends to be counterproductive (you end up needing many, many small machines), and greater than 64 GB has problems".
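For what it's worth, the setup I have in mind would pin the heap like this (just a sketch of the standard `jvm.options` file; 31g is the commonly cited safe value below the ~32GB compressed-oops cutoff, not something from the docs above):

```
# config/jvm.options — set min and max heap to the same value,
# kept just under the ~32GB compressed-oops threshold
-Xms31g
-Xmx31g
```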
In the URL below, under the "I Have a Machine with 1 TB RAM!" section, if I understand correctly, we can allocate < 32GB RAM for the heap and leave all the remaining RAM (968GB) for Lucene to use for faster search responses (and other OS activities?). Of course, I'm not planning anything bigger than 128GB or 256GB RAM.
In the video at https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing , it says it is better to have a 1:16 ratio of RAM to hard disk.
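Applying that 1:16 ratio to the machine sizes I'm considering (my own quick arithmetic, not figures from the talk):

```python
# Rough disk sizing from the 1:16 RAM-to-disk ratio mentioned in the talk.
RAM_TO_DISK = 16

for ram_gb in (64, 128, 256):
    disk_gb = ram_gb * RAM_TO_DISK
    print(f"{ram_gb} GB RAM -> ~{disk_gb} GB (~{disk_gb / 1024:.0f} TB) disk")
# 64 GB RAM -> ~1024 GB (~1 TB) disk
# 128 GB RAM -> ~2048 GB (~2 TB) disk
# 256 GB RAM -> ~4096 GB (~4 TB) disk
```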
So, I wanted to get clarification on the following:
- I'm planning to go with 128GB or more RAM, allocating 31GB for the heap and leaving 97GB+ for Lucene and other activity. Do you see any concerns?
- Tomorrow, if a high number of user searches leads to slow responses, can I still increase the RAM to 256GB or 512GB and keep the heap at 31GB?
I totally understand we should not cross 32GB for the heap allocation. I want to understand what the problem is, if any, in allocating more RAM (100+GB) to Lucene and other OS activities.