Does X GB of data in Elasticsearch mean X GB of memory used?

We deployed a 2-node Elasticsearch cluster with 2 × 30 GB of memory.

I noticed that when there is X GB of data in the cluster, roughly X GB of memory is used.

Today Elasticsearch is already using 35 GB of memory with 55.9 GB of data. What will happen if we have 100 GB of data? Will Elasticsearch go down with an out-of-memory ("Java heap space") exception?
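(For context, per-node heap usage can be checked with the `_cat/nodes` API; a minimal sketch, assuming the cluster is reachable at `localhost:9200` without authentication:)

```python
# Minimal sketch: report per-node heap usage via the _cat/nodes API.
# Assumes the cluster is reachable at http://localhost:9200 with no authentication.
import urllib.request

url = ("http://localhost:9200/_cat/nodes"
       "?v&h=name,heap.current,heap.percent,heap.max,ram.current")
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))
```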

What's more, if we only have X GB of memory, does that mean we can hold (roughly) at most X GB of data in Elasticsearch?

No, it's not that simple. There is no formula that can be used to calculate the heap needed for a given amount of data.

Thanks for your reply!

If we don't want to add more nodes, is our only choice to close old indices and keep the amount of data Elasticsearch holds below some threshold (even though there is no formula for that threshold)?

Closing or deleting indexes isn't the only way of reducing the cluster's memory pressure, but in the short term (if heap utilization is near the ceiling) it's probably the easiest way. Other options include changing the mappings to use less memory and, if you have many shards, reducing the number of shards.
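For example, closing an old index is a single API call; here is a minimal sketch, assuming an index named `logs-old` and a cluster reachable at `localhost:9200` (both placeholders):

```python
# Minimal sketch: close an old index so its shards no longer consume heap.
# The index name "logs-old" and the address are placeholder assumptions.
import urllib.request

req = urllib.request.Request(
    "http://localhost:9200/logs-old/_close",
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # expect an "acknowledged" response
```

A closed index can be reopened later with `POST /<index>/_open` if the data is needed again.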

Thanks for your suggestions!