Since we patched Logstash from 5.6.5 to 5.6.8 we're seeing what looks like a memory leak: it now runs out of memory after 6-10 days, even though nothing changed in our filters etc. Before the upgrade, memory usage was steady (see memory trend graph). So I'm assuming there's a leak somewhere.
If anyone else is seeing this, or knows of a possible issue/workaround/fix, please let me know.
TIA
# rpm -qi logstash
Name         : logstash                          Relocations: /
Version      : 5.6.8                             Vendor: Elasticsearch
Release      : 1                                 Build Date: Fri 16 Feb 2018 07:11:44 PM CET
Install Date : Thu 22 Feb 2018 12:26:39 PM CET   Build Host: packer-virtualbox-iso-1518356860
Any info you're willing to provide about how your pipelines are configured (which plugins they use and their versions, heap settings, how many workers, whether you've got persistent queueing enabled, etc.) will be helpful in narrowing down the surface area of the problem.
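If it helps, one way to gather that (a rough sketch, assuming the default RPM layout and file locations) is:

/usr/share/logstash/bin/logstash-plugin list --verbose            # plugin names and versions
grep -E '^-Xm[sx]' /etc/logstash/jvm.options                      # heap min/max
grep -E '^(pipeline.workers|queue.type)' /etc/logstash/logstash.yml   # workers and queue type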
Check your Logstash config! By default the queue type is set to memory, and with a memory leak it fills the whole heap and Logstash goes down.
I have 16 GB of RAM just for Logstash. I start Logstash and add Beats clients one by one (nearly 250 clients). It works fine for 3-4 weeks, but RAM usage grows from 700 MB to 16 GB, and after that I get an OOM error.
After a restart, all 16 GB of RAM fills up within a few minutes, because many clients send a huge amount of metrics and logs, and with the default Logstash config all of it goes into RAM.
After I changed the queue to persisted mode and set a path for the queue, I could restart Logstash without any errors. The only problem is that Logstash needs some time to process the backlog in the queue while new metrics and logs keep arriving.
So I think you need to set the queue to persisted mode in logstash.yml, in the # ------------ Queuing Settings -------------- section. For example:
queue.type: persisted
path.queue: /home/queue/logstash
queue.max_bytes: 8gb
After that change I give Logstash 2 GB. After the service starts, usage grows from 700 MB to 1900 MB and then starts cleaning itself up: it climbs from 1900 MB to 1964 MB and drops back to 1900 MB. It has been working that way for 2 weeks now without an OOM error.
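For reference, and assuming the 2 GB above refers to the JVM heap, the relevant lines would be in jvm.options (path below assumes a default RPM install):

# /etc/logstash/jvm.options
-Xms2g
-Xmx2g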
Got around 100 clients, but it seems grok plugin 4.0.3 fixed this issue for me.
Considering changing from grok to dissect at some point...
The drop in memory footprint at 2018-03-21 00:00 was a restart due to OOM; then at 10:00 Logstash got restarted with grok 4.0.3, and memory growth seems much flatter since.
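For anyone else weighing the grok-to-dissect switch, a minimal sketch of a dissect filter for a simple space-delimited message (the field names here are just placeholders, not from my actual pipeline):

filter {
  dissect {
    # splits "message" on spaces into three fields; roughly what a grok
    # pattern like "%{IP:client} %{WORD:verb} %{URIPATHPARAM:request}" would do,
    # but without regex matching
    mapping => { "message" => "%{client} %{verb} %{request}" }
  }
}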