We deployed a 2-node Elasticsearch cluster with 2 × 30 GB of memory.
I noticed that when there is X GB of data in the cluster, roughly X GB of memory is used.
Today Elasticsearch already used 35 GB of memory with 55.9 GB of data. What will happen when we have 100 GB of data? Will Elasticsearch go down with a Java heap space exception?
What's more, if we only have X GB of memory, does that mean we can hold (roughly) at most X GB of data in Elasticsearch?
If we don't want to add more nodes, is the only choice to close old indices and keep the amount of data Elasticsearch holds below some threshold (even though there is no formula for that threshold)?
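To see whether it is really the JVM heap that is filling up (rather than the OS filesystem cache, which also counts as "used" memory), you can query the _cat/nodes API. A minimal sketch in Python; the cluster address is an assumption, adjust it to one of your two nodes:

```python
import requests

ES = "http://localhost:9200"  # assumed address of one of your nodes

# Request only the heap-related columns so the output stays readable.
resp = requests.get(
    f"{ES}/_cat/nodes",
    params={"v": "true", "h": "name,heap.current,heap.max,heap.percent"},
)
print(resp.text)
```

If heap.percent stays well below the maximum while overall memory use grows with data size, the "missing" memory is mostly the filesystem cache, which the OS reclaims as needed.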
Closing or deleting indices isn't the only way to reduce the cluster's memory pressure, but in the short term (if heap utilization is near the ceiling) it's probably the easiest one. Other options include changing the mapping to use less memory and, if you have many shards, reducing the number of shards.
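As a hedged sketch of the "close old indices" approach: list indices sorted by creation date, then close the oldest ones. The cluster address and the number of indices to keep open are assumptions for illustration, not values from this thread:

```python
import requests

ES = "http://localhost:9200"   # assumed cluster address
KEEP_OPEN = 10                 # assumed: keep the newest 10 indices open

# _cat/indices can sort by creation date (oldest first) and return JSON.
rows = requests.get(
    f"{ES}/_cat/indices",
    params={"h": "index,creation.date", "s": "creation.date", "format": "json"},
).json()

old_indices = [row["index"] for row in rows[:-KEEP_OPEN]] if len(rows) > KEEP_OPEN else []

for index in old_indices:
    # Closing an index frees most of the heap its shards were holding, but the
    # index cannot be searched until it is reopened (POST /<index>/_open).
    requests.post(f"{ES}/{index}/_close")
    print(f"closed {index}")
```

Closed indices stay on disk, so you can reopen them later if you need to search the older data again.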