We deployed a two-node Elasticsearch cluster with 30 GB of memory per node (60 GB total).
I've noticed that when there are X GB of data in the cluster, roughly X GB of memory are used.
Today Elasticsearch is already using 35 GB of memory with 55.9 GB of data. What will happen once we reach 100 GB of data? Will Elasticsearch go down with a "Java heap space" OutOfMemoryError?
What's more, if we only have X GB of memory, does that mean we can hold at most (roughly) X GB of data in Elasticsearch?
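For reference, here's a minimal sketch of how I'm comparing heap usage to data size, using the nodes stats and cluster stats APIs (assuming the cluster is reachable at http://localhost:9200 with no authentication):

```python
import json
import urllib.request

# Assumption: cluster is reachable here with security disabled.
BASE = "http://localhost:9200"

def get(path):
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)

# Per-node JVM heap usage from the nodes stats API.
nodes = get("/_nodes/stats/jvm")
for node_id, node in nodes["nodes"].items():
    mem = node["jvm"]["mem"]
    used_gb = mem["heap_used_in_bytes"] / 1024**3
    max_gb = mem["heap_max_in_bytes"] / 1024**3
    print(f"{node['name']}: heap {used_gb:.1f} GB used of {max_gb:.1f} GB max")

# Total on-disk size of all indices from the cluster stats API.
cluster = get("/_cluster/stats")
store_gb = cluster["indices"]["store"]["size_in_bytes"] / 1024**3
print(f"total data on disk: {store_gb:.1f} GB")
```

This is how I got the 35 GB vs. 55.9 GB numbers above, so please tell me if heap usage is the wrong thing to be measuring here.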