Kibana memory usage


I'm wondering if anyone is monitoring their Kibana Node app?

Check this:

Is anyone else seeing similar memory/GC patterns? It looks problematic, and I'm wondering whether others who monitor Kibana are seeing the same sort of thing.



Which version?


A little update on this issue:

We use Kibana 4.1.1. The chart from Otis is from io.js 2.3; we had the same growing-memory issue on Node 0.10.35. We then disabled Node.js monitoring to rule out any influence from the monitoring function itself and checked the RSS daily (ps aux | grep kibana); after 3 days we were again above 580 MB RSS, so the memory growth had not gone away. Finally, a few days ago, we installed Node v4.0.0 and the charts look much better (until now). RSS memory stays at a level of 60-76 MB :). We still see growing GC activity and a small increase in heap memory. Event-loop latency grows only slightly. I will investigate the growing number of GC runs further.

Interesting that the time spent on GC was high only on Node 0.10, very low on io.js 2.3, and even lower on Node 4.0.
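For anyone wanting to do the same daily RSS check without monitoring agents, here's a minimal sketch. The rss_mb helper is something I made up for illustration; it just wraps ps -o rss=, which reports resident memory in KB on Linux.

```shell
# Hypothetical helper: print a process's RSS in whole MB.
rss_mb() {
  local kb
  # ps -o rss= prints resident set size in KB with no header
  kb=$(ps -o rss= -p "$1" | tr -d ' ')
  echo $((kb / 1024))
}

# Stand-in: measure the current shell. For Kibana, substitute its PID,
# e.g. rss_mb "$(pgrep -f kibana | head -n1)" (pattern may need adjusting
# for your install).
rss_mb $$
```

Run it from cron once a day and log the output to spot the kind of steady growth described above.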

The last few days running Node 4.0.0:

The last 60 days; see the comments marking where we changed the Node.js version:

We're running Kibana 4.2.1 on a CentOS 6.7 box, and we're seeing similar issues with Kibana memory usage, i.e. it quickly spirals out of control (over 512 MB) and is then killed by the OOM killer.

Is it possible to limit memory usage?
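On limiting memory: V8 (and hence Node) accepts a --max-old-space-size flag that caps the old-generation JS heap, in MB. A sketch, assuming you can edit the node invocation in Kibana's launch script; the entry-point name and the 250 MB value below are illustrative, not an official recommendation:

```shell
# Illustrative launch command: cap V8's old-generation heap at 250 MB.
# "src/cli" stands in for however your Kibana version starts the server;
# in Kibana 4.x the flag would go on the node line inside bin/kibana.
node --max-old-space-size=250 src/cli
```

Note this caps only the JS heap: if there really is a leak, Kibana will then die with an allocation failure inside Node instead of being OOM-killed by the OS, which at least gives you a heap-related stack trace rather than a silent kill.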

A bunch of leaks were fixed in 4.3 and backported to 4.2.1 and 4.1.3: