Kibana 4.2 server memory usage

Is anyone else seeing insane memory growth in the Kibana 4.2 node process server-side? I restarted it before going to bed, at which point it used just over 100 MB; I left it running overnight without touching the browser side at all, and it has already grown to over 500 MB.
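To put numbers on the growth, a minimal sampler can log the process RSS over time (a sketch; the `pgrep` pattern and log path are assumptions about a default package install, so adjust for your setup):

```shell
#!/bin/sh
# Append one timestamped RSS sample for the Kibana server process.
# Run from cron every few minutes; the pattern and log path are
# assumptions about a stock /opt/kibana install.
pid=$(pgrep -f 'node .*src/cli' | head -n 1)
if [ -n "$pid" ]; then
  rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
  printf '%s pid=%s rss_kb=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$pid" "$rss_kb" \
    >> /var/tmp/kibana-rss.log
fi
```

Plotting the resulting log makes it easy to see whether the growth is steady or stepwise, which helps narrow down the cause.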

Running on Debian 7.9

We recently fixed some listener leaks, but there could be more. Some users have reported seeing memory leaks in 4.2, but that hasn't been reproduced on our side yet: https://github.com/elastic/kibana/issues/5170

What is the server doing? Are you running a dashboard that is frequently refreshing? Feel free to report more details in the issue I linked above or file a new issue, if you feel your problem is different.

During the test the server was doing absolutely nothing at all. I left it overnight, without touching any dashboards in Kibana, and yet memory grows.

For the record, I'm seeing the same thing: Kibana 4.2 had been running for 4-5 days and its memory usage was at ~1 GB; upon restarting, it sits at ~100 MB.

There was not much 'dashboard activity', but I have been running a few queries in 'discover' over the past few days.

I also noticed increasing memory usage in both a Kibana 4.2 and a Kibana 4.3 installation on Ubuntu Server 14.04. Kibana is proxied through an nginx server with SSL encryption. My workaround (for the moment) is to restart Kibana every 8 hours, which resets memory usage to the point where it had been in version 4.1. After approx. 8 hours, memory usage grows from 100 MB to 600 MB. The downside is that there is a lot of ruckus after Kibana is restarted, with reports of failed visualisations, bad gateways and connection errors that tend to persist unless you reset your browser session.
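For anyone wanting the same stopgap, the 8-hourly restart can be automated with a cron entry (a sketch; `service kibana restart` assumes the init script shipped by the stock .deb/.rpm package):

```shell
# /etc/cron.d/kibana-restart -- restart Kibana every 8 hours as a
# stopgap for the memory growth. Assumes the package-provided init
# script is registered as the "kibana" service.
0 */8 * * * root service kibana restart
```

Note this only masks the problem, and as described above, clients connected during the restart will see transient errors.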

Same here using Kibana 4.3. It grows quite a bit. On one system, it reached 1 GB (and I don't think anybody ever connected to it).

Restarting it drops the memory usage back to 100 MB.

What systems are you experiencing this on?

Ubuntu 14.04 LTS. Need more details?

The more the better. Do you have logging turned on? Logging to any files? What version of Kibana and what version of ES? Do you have Shield installed? What about any other ES plugins?

I don't think I have logging on. Kibana 4.3, Elasticsearch 2.1.0.

No plugins on Elasticsearch.

Same here, Kibana 4.3.1 on CentOS:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15858 root      20   0 2349m 1.4g 4632 S  0.9 49.7   4:21.36 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli

Same for Kibana 4.3.1 on RHEL 6. Watching the Kibana node process in top, I see its memory slowly increasing with nothing accessing Kibana after a restart. I tried adding the --max-old-space-size option to the kibana exec line, but it doesn't seem to have any effect:

  57505 kibana    20   0 1681m 817m 9252 S  1.0 21.4   0:45.85 /opt/kibana/bin/../node/bin/node --max-old-space-size=500 /opt/kibana/bin/../src/cli
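Two things worth checking here (a sketch; the `pgrep` pattern is an assumption about a stock /opt/kibana install). First, confirm the flag actually landed on the running process's command line, since node options must appear before the script path, as in the line above. Second, keep in mind that `--max-old-space-size` caps only V8's old-generation heap; the RES column in top also counts node's own allocations, Buffers, and sockets, so total RSS can legitimately sit above the cap even when the flag is applied:

```shell
#!/bin/sh
# Inspect the live Kibana process's command line to verify that
# --max-old-space-size was actually applied (pgrep pattern assumed).
pid=$(pgrep -f 'node .*src/cli' | head -n 1)
if [ -n "$pid" ]; then
  # /proc/PID/cmdline is NUL-separated; convert to spaces for display.
  tr '\0' ' ' < "/proc/$pid/cmdline"
  echo
fi
```

If the flag is present and RSS still grows far past the cap without the process ever crashing with an out-of-memory error, that suggests the growth is outside the V8 JS heap, which would explain why the flag appears to do nothing.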

Has anyone tested Kibana 4.4? Does it suffer the same memory leak?

@mnhan Yes - I'm seeing the same behavior with 4.4.1 (build 9693) on Ubuntu Trusty.

@Stuart_Donovan I filed an issue at https://github.com/elastic/kibana/issues/6153

This only happens when I enable SSL on Kibana, which of course one needs when putting ES behind an SSL proxy.

I also see the same thing after upgrading from 1.3 to 4.6.1. It seems to be periodic. Usage goes up to 14 GB.

The server is installed on an Ubuntu slim Docker image and has the Timelion and Mathlion plugins installed. Below is a sample memory usage chart.