Using Logstash 6.7.1
It really feels like the memory consumption increases day by day. So:
Is there really a memory leak?
I read in some of the release notes a comment about minimizing it, so I assume it's real. Has it been fixed completely in later versions?
My pipelines make heavy use of the "ruby" filter plugin. Could that be related? IIRC, the memory leak was related to JRuby... Is JRuby involved only when the "ruby" filter plugin is being used?
My setup is composed of 10 identical pipelines. Does that setup increase the effect of the memory leak? Would it be better if I reduced them to 5 pipelines, for example?
Can you show your config or list the plugins you are using? If you have any plugins that rely on reading files, the reported memory usage could be growing because the OS is caching those files, which is not a memory leak. If that were the case, I would expect the reported memory use to go down once Logstash is restarted.
The key aspect is that I use the Ruby filter plugin, which keeps several Hashes in memory, where each key is a combination of hostname and logfile from the source input data. That is a finite set, so at some point all possible keys should already have been created.
I also assume those Hash entries are eventually reused when a key value repeats, since they are class variables, correct? Ruby is not keeping all of them in memory forever, right?
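The pattern described above can be sketched outside Logstash. This is a minimal illustration, assuming state is held in a class-variable Hash keyed by hostname and logfile; the names (`SourceState`, `update`) are hypothetical and not from any Logstash API:

```ruby
# Hypothetical sketch of per-source state kept in a class variable,
# as a Logstash "ruby" filter might do across invocations.
class SourceState
  @@state = {} # shared Hash, survives between calls

  def self.update(hostname, logfile, value)
    key = "#{hostname}:#{logfile}"
    # A repeated key overwrites the existing entry, so the Hash size
    # is bounded by the number of distinct hostname/logfile pairs;
    # entries are replaced, not accumulated.
    @@state[key] = value
  end

  def self.size
    @@state.size
  end
end

SourceState.update("web01", "access.log", 1)
SourceState.update("web01", "access.log", 2) # same key: overwritten
SourceState.update("web02", "error.log", 3)
SourceState.size # => 2, not 3
```

So under this assumption the Hash itself stays bounded; old values for a repeated key become unreferenced and eligible for garbage collection. Whether the JVM actually returns that memory to the OS is a separate question.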