Is anyone else seeing similar memory/GC patterns? It's obviously problematic, and I'm wondering whether anyone else who monitors Kibana is seeing the same sort of thing.
A little update on this issue:
We use Kibana 4.1.1
The chart from Otis is using io.js 2.3 - we had the same issue with 0.10.35 (growing memory). Then we disabled Node.js monitoring to rule out any influence from the monitoring itself and checked the RSS daily with `ps aux | grep kibana`; after 3 days we were again above 580 MB RSS, so the memory growth problem was not gone. A few days ago we finally installed Node v4.0.0, and the charts look much better (until now). The RSS stays at 60-76 MB :). Still, we see growing GC activity and a slight increase in heap memory. The event loop latency is not growing much - just a little. I will further investigate the growing number of GC runs.
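In case it helps anyone watching for the same pattern, here is a minimal sketch of that kind of manual RSS check as a loop instead of a daily one-off; the log path, interval, and grep pattern are assumptions, not what we actually ran:

```sh
#!/usr/bin/env bash
# Sample the Kibana process periodically and append a timestamped RSS value
# (in KB) to a log file, so growth over several days is easy to spot.
LOGFILE=/var/log/kibana-rss.log
while true; do
  # [k]ibana keeps the grep from matching its own process
  RSS=$(ps aux | grep -i '[k]ibana' | awk '{print $6}' | head -n1)
  echo "$(date '+%Y-%m-%dT%H:%M:%S') rss_kb=${RSS}" >> "$LOGFILE"
  sleep 600
done
```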
Interesting that the time spent on GC was only high in Node 0.10, very little in io.js 2.3, and even less in Node 4.0. https://apps.sematext.com/spm-reports/s/fouGUCzwrH
We're running Kibana 4.2.1 on a CentOS 6.7 box, and we're seeing similar issues with Kibana's memory usage, i.e. it quickly spirals out of control (over 512 MB) and is then killed by the OOM killer.
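If it helps to confirm that it really is the kernel OOM killer (rather than something else restarting the process), a quick check on a CentOS 6 box could look like this; the log path and grep patterns are just examples:

```sh
# Look for kernel OOM-killer entries mentioning the node process that runs Kibana.
dmesg | grep -iE 'out of memory|killed process'
grep -i 'killed process' /var/log/messages
```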