I know that an Elasticsearch node uses memory for both the JVM heap and the file system cache. If I have a node with 50GB of memory and set the JVM heap to 25GB, is it expected that total memory usage stays high, around 48GB, and never goes down?
Will it eventually consume all the memory and kill the node?
That memory is managed by the OS, which uses it to cache commonly accessed files. This is why we suggest keeping the heap at 50% of total memory: it leaves the OS room to do this caching, which improves the speed of Elasticsearch.
The OS will keep managing that cache and will release it whenever applications need the memory, so the node won't run out of memory.
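You can see this on the node itself by comparing MemFree with MemAvailable in /proc/meminfo: the page cache counts toward "used" memory, but the kernel reports how much of it is reclaimable. Here is a minimal sketch, assuming a Linux node (the field names are standard kernel ones, not anything Elasticsearch-specific):

```python
# Compare "free" vs "available" memory on a Linux node.
# The page cache shows up as used memory, but most of it is reclaimable,
# which the kernel reports via the MemAvailable field.

def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are in kB
    return info

mem = read_meminfo()
total = mem["MemTotal"]
free = mem["MemFree"]
available = mem["MemAvailable"]  # free memory plus reclaimable cache/buffers

print(f"Total:     {total / 1024 / 1024:6.1f} GB")
print(f"Free:      {free / 1024 / 1024:6.1f} GB  ({free / total:.0%})")
print(f"Available: {available / 1024 / 1024:6.1f} GB  ({available / total:.0%})")
```

On a node like the one you describe, "free" will look tiny while "available" stays close to total minus the 25GB heap, because the rest is cache the OS can hand back at any time.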
Is there a threshold, like a certain percentage of memory used, at which the OS steps in and handles it? I'm monitoring memory usage on my cluster, and when a node reaches high usage it triggers an alert. Right now the threshold is set to 90%, and it's obviously easy to reach. Should I raise it higher, say to 95%, or is there no need to worry about it at all?
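For what it's worth, a more meaningful signal than raw OS memory percentage is the JVM heap usage that Elasticsearch itself reports. A minimal sketch, assuming a cluster reachable at http://localhost:9200 with security disabled (adjust the URL and add auth for a real deployment), reading heap and OS memory figures from the _nodes/stats API:

```python
# Poll per-node JVM heap and OS memory usage from the _nodes/stats API.
# Assumes a locally reachable cluster with no authentication.
import json
import urllib.request

URL = "http://localhost:9200/_nodes/stats/jvm,os"  # adjust host/port as needed

with urllib.request.urlopen(URL) as resp:
    stats = json.load(resp)

for node in stats["nodes"].values():
    name = node["name"]
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    os_used_pct = node["os"]["mem"]["used_percent"]
    print(f"{name}: heap {heap_pct}% used, OS memory {os_used_pct}% used")
```

Heap usage that climbs and stays near its limit is a much stronger warning sign than the OS-level figure, which will sit high by design because of the file cache.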