We have a 4-node cluster of 32GB c5.4xlarge instances for Elasticsearch, and the Elasticsearch process is getting killed after a period of about 3 weeks, after which there is not enough memory left to restart it. We had to manually clear the cache and reduce the heap memory settings to get the process started again.
Which version of Elasticsearch are you using? Older releases in the 7.x series allowed off-heap allocation of up to the heap size; this has been reduced to half the heap size in releases newer than 7.3 IIRC. You may want to reduce the -Xms and -Xmx values from their current value of 15GB.
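For reference, the heap size is set in Elasticsearch's jvm.options file. A minimal sketch, assuming a package install path and an illustrative 8GB target (pick whatever value fits your workload, not this specific number):

```
# config/jvm.options (often /etc/elasticsearch/jvm.options on package installs)
# Set -Xms and -Xmx to the same value so the heap is sized once at startup.
# 8g is only an example target, not a recommendation for your cluster.
-Xms8g
-Xmx8g
```

Keeping -Xms equal to -Xmx avoids resizing pauses, and whatever RAM you don't give the heap stays available to the OS filesystem cache, which Lucene relies on heavily.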
I'm using Elasticsearch 7.5. Is the rule of allocating ~50% of available RAM to the heap no longer required for this version, so Elasticsearch will still have enough memory for in-memory processing?
Sure. Unless your nodes really need a large amount of heap space (e.g. for large aggregations, or to ingest very big documents) you can reduce the allocated heap space. Make sure you monitor heap usage in Kibana's monitoring app to detect when memory gets tight. This typically shows up as heap usage sitting at or above the GC's initiating occupancy fraction (75% in the default jvm.options) for long periods of time.
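If you want a quick check outside Kibana, the _cat/nodes API exposes per-node heap usage. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 without authentication:

```
# Per-node heap usage; heap.percent sitting persistently near the GC
# threshold (~75%) is the signal to watch for.
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'
```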