Regarding: Logstash JVM OOM

Hi
Our Logstash version is 8.4.3.
The Logstash container is configured with CPU 1000m, a memory limit of 7Gi, a memory request of 6Gi, and the JVM heap set to 67% of the memory limit.
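For reference, a minimal sketch of how this sizing might look in a Kubernetes manifest and jvm.options, assuming the heap is sized at 67% of the 7Gi limit (the numbers come from the description above; the file layout and whether 1000m is a request or a limit are assumptions):

```yaml
# container resources (values as described; 1000m shown as the limit)
resources:
  requests:
    memory: "6Gi"
  limits:
    cpu: "1000m"
    memory: "7Gi"
```

```
# jvm.options -- 67% of the 7Gi limit is roughly 4.7g
-Xms4700m
-Xmx4700m
```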
From the CPU and memory metrics we can see that memory usage never even reaches 3.5 Gi, yet the JVM heap is almost fully utilized and Logstash went OOM.
We suspect that large events (more than 2 MB) may be causing the issue.
Can you confirm whether Logstash is unable to process events larger than 2 MB with the above CPU and memory configuration?
Does the JVM heap need to be tuned based on the size (in bytes) of incoming events?
The worker node runs Red Hat Linux with cgroups configured, and we can see the cgroup OOM killer terminating the Logstash process.
Can you suggest what might cause Logstash to exhaust the JVM heap even though sufficient memory and CPU appear to be available?
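To double-check that it really is the cgroup OOM killer, the kernel log on the node and the pod's last state can both be inspected (generic commands; the pod name is a placeholder):

```sh
# On the worker node: look for OOM-killer events in the kernel log
dmesg -T | grep -iE 'oom|killed process'

# From Kubernetes: an OOMKilled reason in the container's last state
# confirms the memory cgroup limit was hit
kubectl describe pod <logstash-pod> | grep -A3 'Last State'
```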

Which plugin do you use? Grok, JSON, XML?
How complex is the filter section?
Have you used pipeline statistics? (See the example below.)
Have you split the output into separate pipelines?
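If not, the node stats API exposes per-pipeline event counts and plugin timings, which usually shows where the time and memory go:

```sh
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'
```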

Hi
The plugin is json.
Four pipelines are created:
1. a custom pipeline (stdout), 2. logstash, 3. elasticsearch.
The filter section is of medium complexity.
Only the logstash pipeline is used.
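A hedged sketch of what pipelines.yml might look like for this layout (pipeline ids and config paths are assumptions, and only three of the four pipelines are named above):

```yaml
# pipelines.yml -- ids and paths are illustrative assumptions
- pipeline.id: custom-stdout
  path.config: "/usr/share/logstash/pipeline/custom.conf"
- pipeline.id: logstash
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: elasticsearch
  path.config: "/usr/share/logstash/pipeline/elasticsearch.conf"
```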

Do you mean the pipeline statistics of the JVM?
Can you share your opinion on what caused the OOM? Might it be an OS issue or Logstash itself?

Hi
Can you let me know if Logstash has any 1 MB limit per message, and whether we need to increase the JVM heap values when more bytes are sent to Logstash?

I have tested Logstash with a generator input and the json_lines codec, an empty filter, and output saved to a file. Performance dropped for data above ~128 KB. Maybe I had a bad data sample.
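Roughly, the test pipeline looked like the sketch below (a reconstruction, not the exact config; the message content, count, and output path are placeholders):

```
input {
  generator {
    # each generated line is decoded by the json_lines codec; in the
    # real test the message was a large (~128 KB+) JSON document
    message => '{"example_field": "large payload here"}'
    count   => 100000
    codec   => json_lines
  }
}

filter {
  # intentionally empty
}

output {
  file {
    path => "/tmp/logstash-perf-test.log"
  }
}
```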

Can you provide a message sample and information on how you convert to JSON: with a codec, or with the json plugin in the filter section?
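For clarity, these are the two standard approaches being asked about (illustrative input; the tcp port is an arbitrary example):

```
# 1) decode at the input with the json codec
input {
  tcp {
    port  => 5000
    codec => json
  }
}

# 2) decode later with the json filter plugin
filter {
  json {
    source => "message"
  }
}
```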
