Limit Logstash 7.5 memory usage

Hi there,
is there a way to limit the memory consumption of Logstash 7.5? In our case Logstash 'eats up' all the memory, so that Elasticsearch itself gets killed on the same host (OOM killer).
From 7.8 onwards there is a documented option for this:

but it is not documented for lower versions.
Thanks for any idea!

AFAIK, all 7.x versions support this. Check whether your version has a jvm.options file and set the heap values in it:

-Xms2g
-Xmx2g

Do not modify the root jvm.options file. Use files in jvm.options.d/ instead.
Copy it, set the same values, save, and restart the process.
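
To confirm the new heap size actually took effect, you can ask the Logstash monitoring API for its JVM stats (a quick check, assuming the API listens on its default port 9600; adjust host/port if you changed http.host or http.port):

# heap_max_in_bytes in the response should be roughly 2 GB with -Xmx2g
curl -s http://localhost:9600/_node/stats/jvm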


I did, and again Elasticsearch gets killed since there is no RAM left (Logstash / Elasticsearch / Kibana are running on the same machine). Could it be some kind of memory leak in Logstash or in a Logstash plugin? I am mainly using the tcp input plugin for Logstash plus some filter definitions.

Cheers!

Logstash memory consumption depends on the data and the queue. How many records are you processing per minute? How many GB is your limit?
It might be a memory leak, depending on the version. Not sure.
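
If it is simply in-flight data rather than a leak, a few logstash.yml settings bound how much Logstash keeps in memory. A minimal sketch, assuming a single pipeline and that spilling the queue to disk is acceptable (the values are illustrative, not recommendations):

# /etc/logstash/logstash.yml
queue.type: persisted      # buffer events on disk instead of in the heap
queue.max_bytes: 1gb       # cap the size of the on-disk queue
pipeline.workers: 4        # fewer workers -> fewer in-flight batches
pipeline.batch.size: 125   # events held in memory per worker batch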

Can someone from Elastic please tell us whether the 7.5.x versions have any known memory leaks?
Thank you!

Check release notes:

  • Fix: eliminates a crash that could occur at pipeline startup when the pipeline references a java-based plugin that had been installed via offline plugin pack #11340

Also, temporarily enable debug logging to see what the cause is.
For sure, the Elastic team can advise more.
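
For example (a sketch, assuming you can either restart Logstash or reach its API on the default port 9600), debug logging can be enabled persistently or at runtime:

# persistent: add to /etc/logstash/logstash.yml and restart
log.level: debug

# at runtime via the logging API, here only for the tcp input (the exact logger name is an assumption)
curl -XPUT http://localhost:9600/_node/logging -H 'Content-Type: application/json' \
  -d '{"logger.logstash.inputs.tcp": "DEBUG"}'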


What are the specs of the machine? How much memory is set for the Logstash and Elasticsearch heaps? Please share them.

Also how did you track the cause to Logstash since it is Elasticsearch that is getting killed?

If you are constantly getting OOM kills, maybe the specs of your machine do not fit your use case. The JVM uses memory besides the heap, so Logstash and Elasticsearch need more memory than what is specified in the heap settings.
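
If the immediate goal is only to stop Logstash from starving Elasticsearch, one option is to cap the whole Logstash process, heap plus off-heap, at the OS level. A sketch, assuming Logstash runs as a systemd service named logstash and your systemd supports cgroup memory limits (the 4G figure is just an example):

# /etc/systemd/system/logstash.service.d/memory.conf
[Service]
MemoryMax=4G   # hard cap; the kernel then reclaims from or kills Logstash, not Elasticsearch

# apply it with:
# systemctl daemon-reload && systemctl restart logstash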

Thanks for asking. These are my specs:
1 cluster built of 3 all-in-one nodes (meaning: Elasticsearch with master & data roles, Kibana, and Logstash on the same node) plus 2 further nodes (data only).
Each node has:
30GB RAM
8 VCPU
Disk: 40GB
SSD: 600GB

limits.conf:
elasticsearch  -  nofile  65536
elasticsearch  -  nproc   4096
logstash       -  nproc   1024

/etc/logstash/jvm.options:
-Xms2g
-Xmx2g

/etc/elasticsearch/jvm.options:
-Xms12g
-Xmx12g

logstash.conf:
input {
  tcp {
    port => 5518
    codec => "fluent"
...

And I am seeing this in the Logstash log:
java.io.IOException: Too many open files

This is where all the events come in (40-60 GB per day).
cheers

Check this: Logstash not closing connections correctly · Issue #4225 · elastic/logstash · GitHub
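
Until that is sorted out, you could also raise the file-descriptor limit for the Logstash process itself. A sketch, assuming Logstash runs under systemd (the limits.conf entry only applies to non-systemd setups; 65536 is an example value):

# /etc/systemd/system/logstash.service.d/limits.conf
[Service]
LimitNOFILE=65536   # raise the max number of open file descriptors

# for non-systemd setups the equivalent line in /etc/security/limits.conf would be:
# logstash - nofile 65536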

I would like to give a final resolution on this. The reason for the out-of-memory kills by the kernel was ElastAlert, which at peaks was using nearly twice as much memory as Elasticsearch itself.
Sadly I do not know how to influence this bad behaviour of ElastAlert.
cheers

Simply out of memory, or rather the memory was taken over by another app. However, the most important thing is that you have solved the issue. Well done. :+1:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.