ES - Possible to use RAM and Secondary Storage if Heap is full?

Hi,

Is it possible to, say, cap Elasticsearch at 16 GB of RAM and, when more data comes into the indices, have it shuffle data between RAM and secondary storage?

What I understand so far is that if somehow all documents end up being searched, then all documents will ultimately stay in the heap forever... is that a correct assumption?

Thanks in advance!

That's already how Elasticsearch works. The heap is only allocated for current operations, RAM is used for buffering, and indices are stored in secondary storage (files) on the distributed machines.

Sorry to repeat myself, but are we saying we can have 16 GB of RAM for an index that is 1 TB in total (as an example)?

At what point does it decide that a specific document will leave RAM (heap) and another will be loaded into the heap?

RAM / memory size does not determine maximum index size. There is no relation between the two.
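A node with a 16 GB heap can serve indices whose on-disk size is far larger. As a rough illustration (a sketch assuming the official Python client 8.x against a local, hypothetical cluster), you can compare the heap limit per node with the total on-disk size of your indices:

```python
from elasticsearch import Elasticsearch

# Assumption: a node reachable at localhost:9200.
es = Elasticsearch("http://localhost:9200")

# Heap limit per node (configured via -Xms/-Xmx in jvm.options, e.g. 16 GB).
for node_id, node in es.nodes.stats(metric="jvm")["nodes"].items():
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    print(f"{node['name']}: heap max {heap_max / 2**30:.1f} GiB")

# Total on-disk size of all indices -- this can be orders of magnitude larger.
store = es.indices.stats(metric="store")["_all"]["total"]["store"]["size_in_bytes"]
print(f"all indices on disk: {store / 2**30:.1f} GiB")
```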

If you index documents, they are flushed to disk automatically by Elasticsearch. This is controlled by document count, document size, or a given interval (e.g. every 5 seconds).
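The interval- and size-based parts of this are exposed as dynamic index settings. A minimal sketch, assuming the 8.x Python client and a hypothetical index called `my-index`:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# refresh_interval: how often newly indexed documents become searchable
# (the "given interval" part; the default is 1s).
# translog.flush_threshold_size: how much indexed data may accumulate in the
# translog before a flush to disk is forced (the default is 512mb).
es.indices.put_settings(
    index="my-index",
    settings={
        "index.refresh_interval": "5s",
        "index.translog.flush_threshold_size": "512mb",
    },
)
```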

If you search documents, Elasticsearch does the heavy lifting of deciding how to use RAM for you. Only the relevant files are buffered in RAM, and only the relevant search results occupy the heap. The exact procedure depends on the query type, the index configuration, and some other things (like caching or concurrency).
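To make the caching part a bit more concrete, here is a small sketch (again assuming the 8.x Python client, a hypothetical index `my-index`, and hypothetical `title`/`status` fields) that opts an aggregation search into the shard request cache; what actually stays on the heap is still decided by Elasticsearch:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The shard request cache keeps the results of size=0 (aggregation-style)
# searches on the heap; request_cache=True opts this query into it.
resp = es.search(
    index="my-index",
    size=0,
    query={"match": {"title": "elasticsearch"}},
    aggs={"by_status": {"terms": {"field": "status.keyword"}}},
    request_cache=True,
)
print(resp["aggregations"]["by_status"]["buckets"])
```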