Is there a way to limit the memory consumption of Logstash 7.5? In our case Logstash eats up all the memory, so Elasticsearch on the same host gets killed by the OOM killer.
From 7.8 onwards such an option exists, but it is not documented for lower versions.
Thanks for any ideas!
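For what it's worth, on 7.x the Logstash heap can be capped directly in its `config/jvm.options` file (the path below assumes a package install; adjust it for your setup):

```
## config/jvm.options — cap the Logstash JVM heap.
## Setting -Xms and -Xmx to the same value avoids resize pauses
## and makes the heap footprint predictable.
-Xms512m
-Xmx512m
```

Note that this only bounds the heap; the JVM still needs additional memory on top of it (metaspace, threads, off-heap buffers), so leave headroom for Elasticsearch.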
I did - and again Elasticsearch gets killed because no RAM is left (Logstash, Elasticsearch, and Kibana all run on the same machine). Could it be some kind of memory leak in Logstash or in a Logstash plugin? We mainly use the tcp input plugin plus some filter definitions.
What are the specs of the machine? How much memory is set for the heap of Logstash and of Elasticsearch? Please share it.
Also, how did you trace the cause to Logstash, since it is Elasticsearch that is getting killed?
If you are constantly hitting OOM, the specs of your machine may simply not fit your use case: the JVM uses memory besides the heap, so Logstash and Elasticsearch each need more memory than their heap settings specify.
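To see how much memory each process really uses beyond its heap setting, you can compare the configured heap against the resident set size (RSS). A minimal sketch, assuming a Linux host with GNU ps:

```shell
# List the largest processes by resident memory (RSS, in KB).
# For a JVM process, RSS is typically noticeably larger than -Xmx,
# because of metaspace, thread stacks, and off-heap buffers.
ps -eo rss,comm,args --sort=-rss | head -n 6
```

Running this right before an OOM kill (or checking the kernel log afterwards with `dmesg | grep -i oom`) shows which process actually consumed the memory.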
Thanks for asking. These are my specs:
1 cluster built of 3 all-in-one nodes (meaning: Elasticsearch with the master and data roles, plus Kibana and Logstash on the same node) and 2 further data-only nodes.
Each node has:
Disk: 40 GB
I would like to give a final answer on this. The reason for the out-of-memory kills by the kernel was elastalert, which at peaks uses nearly twice as much memory as Elasticsearch itself.
Sadly I do not know how to rein in this behaviour of elastalert.
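If elastalert runs as a systemd service (an assumption on my side; the unit name below is hypothetical), one option is to cap it with a cgroup memory limit via a drop-in override, rather than trying to change elastalert itself:

```
# /etc/systemd/system/elastalert.service.d/memory.conf  (hypothetical unit name)
[Service]
# Hard cap enforced by the kernel cgroup controller: if elastalert
# exceeds it, it is OOM-killed inside its own cgroup instead of
# dragging down Elasticsearch on the same host.
MemoryMax=1G
```

After creating the drop-in, apply it with `systemctl daemon-reload` and restart the service. `MemoryMax=` requires systemd 231 or later with the unified cgroup hierarchy; on older setups `MemoryLimit=` is the equivalent directive.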