Our production ES cluster has 10 nodes:
- 6 x 192 GB RAM for hot indexes (data from the last 2 weeks)
- 2 x 32 GB RAM for cold indexes (data older than 2 weeks)
- 2 x 32 GB RAM for dedicated client/master-only nodes
- Running ES 1.7.1
- Application logs are stored in daily indexes of around 70 GB each, so the event rate (EPS) is fairly high, I'd guess.
- Each node with 192 GB RAM runs an ES instance with a 90 GB heap and
index.store.type: memory. Hot indexes live in RAM only; cold indexes are moved to the 2 cold ES nodes after 2 weeks.
- Data in ES is used solely for full-text search, and each message is relatively large.
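For reference, the hot-node setup described above amounts to roughly the following (a sketch reconstructed from the description, ES 1.7.1 syntax; the heap is set via the ES_HEAP_SIZE environment variable, not in the YAML):

```yaml
# elasticsearch.yml on a hot (192 GB RAM) node -- sketch of the setup
# described above, not a recommendation.
index.store.type: memory   # hot indexes held entirely in memory

# Heap is set outside this file, e.g. in the init script or environment:
#   ES_HEAP_SIZE=90g
```

Note that the memory store type was removed in ES 2.0, so any upgrade path would have to drop it.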
Some of my questions:
- Is it a good setup/config to have an ES node with 90 GB of ES_HEAP_SIZE, even when using index.store.type: memory?
- Would it be better to switch to SSDs and run multiple ES instances on each of the 6 x 192 GB RAM servers, each instance with ES_HEAP_SIZE=31g?
- Any suggestions for setting up a cluster that indexes around 70 to 100 GB per day, keeps 2 weeks of data, and provides fast searching/querying?
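To put the last question in numbers, here is a quick back-of-envelope sketch (the daily volumes are from the question; the single replica is the ES default, assumed rather than stated):

```python
# Back-of-envelope sizing for the retention window described above:
# 70-100 GB of new index data per day, kept hot for 14 days, with one
# replica. All figures are estimates, not measurements.

DAYS_HOT = 14
REPLICAS = 1

def hot_storage_gb(daily_gb, days=DAYS_HOT, replicas=REPLICAS):
    """Total storage for primaries plus replicas over the hot window."""
    return daily_gb * days * (1 + replicas)

low = hot_storage_gb(70)     # lower bound of the daily volume
high = hot_storage_gb(100)   # upper bound of the daily volume
print(low, high)             # roughly 2-3 TB of hot data
```

Even the low end (about 2 TB) is far above the roughly 540 GB of total heap across the six 90 GB hot-node heaps, which suggests keeping the full 2-week window in an in-memory store is hard to sustain.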
I keep seeing recommendations that we should keep ES_HEAP_SIZE <= 30.5G; how does that apply to my case with 90G? Unfortunately, I have not found much information on clusters like ours.
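For background on where that ~30.5 GB figure comes from: below roughly 32 GB of heap the JVM can use compressed ordinary object pointers (4-byte object references); above that threshold references grow to 8 bytes, so a chunk of the extra heap is consumed by pointer overhead. A toy model of the effect (the object and reference counts here are made-up assumptions, only meant to show the shape of the trade-off):

```python
# Toy model of the compressed-oops threshold: how many "average" objects
# fit in a heap with 4-byte vs 8-byte object references. The per-object
# sizes below are illustrative assumptions, not JVM measurements.

GB = 1024 ** 3
AVG_OBJECT_BYTES = 64   # assumed average object payload + header
REFS_PER_OBJECT = 4     # assumed average references held per object

def objects_that_fit(heap_gb, compressed_oops):
    ref_bytes = 4 if compressed_oops else 8
    per_object = AVG_OBJECT_BYTES + REFS_PER_OBJECT * ref_bytes
    return heap_gb * GB // per_object

small = objects_that_fit(31, compressed_oops=True)    # 31 GB, 4-byte refs
large = objects_that_fit(40, compressed_oops=False)   # 40 GB, 8-byte refs

# Under these assumptions, a 40 GB heap holds only ~7% more objects
# than a 31 GB heap, despite being 29% larger.
print(small, large)
```

This is why the usual advice is to stay just under the threshold (and, as in the second question above, run several instances per large-RAM machine) rather than use one huge heap, leaving the rest of the RAM to the OS filesystem cache.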