We are running an Elasticsearch (7.16.1, Docker inside Kubernetes) cluster which is monitored by a Kibana stack. After 5-6 days (sometimes sooner) Elasticsearch runs into JVM heap issues/errors. Is there any way to tell Elasticsearch to clean its heap more frequently, or at a lower threshold? Would it help to simply increase the heap size, or would the problem just reappear after a longer period?
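For reference, in a containerized deployment the heap size is usually set via the `ES_JAVA_OPTS` environment variable in the pod spec. The sketch below is illustrative only (the sizes and memory limit are example values, not a recommendation for this cluster):

```yaml
# Illustrative Kubernetes container spec fragment.
# Elastic recommends setting -Xms and -Xmx to the same value,
# and keeping the heap at no more than ~50% of the container's memory.
containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.1
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms4g -Xmx4g"   # example values; tune to your workload
    resources:
      limits:
        memory: 8Gi              # leave headroom above the heap for off-heap usage
```

Note that a sawtooth pattern in heap graphs is normal JVM behavior: the heap fills until the garbage collector runs, then drops. A steady climb that never drops back down is the pattern worth investigating.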
The images show the increase in heap usage; the drop comes after a restart of the node.
Hey, thanks for your help!
Which log file exactly are you referring to? Currently we're using the default Docker settings and there doesn't seem to be any logging enabled.
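For what it's worth, the official Elasticsearch Docker image writes its logs to the container's stdout/stderr by default, so no extra logging configuration is needed to read them in Kubernetes. The pod and namespace names below are placeholders:

```
# Read the Elasticsearch log output from the container's stdout:
kubectl logs <elasticsearch-pod-name> -n <namespace>

# Heap usage can also be queried directly from the cluster API:
kubectl exec <elasticsearch-pod-name> -n <namespace> -- \
  curl -s localhost:9200/_nodes/stats/jvm?pretty
```

The `_nodes/stats/jvm` output includes current heap usage and garbage-collection counts per node, which is more precise than reading the Kibana graphs alone.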
Please don't post pictures of text, logs, or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.