Excessive RAM usage, Logstash gets OOM killed

Hi!

I'm running pipelines on Logstash. Everything ran smoothly, but then the pod went down for a while. In values.yml I set the JVM options as logstashJavaOpts: "-Xmx5g -Xms3g", but Logstash still ends up getting killed by the OOM killer.
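For reference, the relevant part of my values.yml looks roughly like this (other keys omitted; this assumes the official Elastic Logstash Helm chart layout):

    # values.yml (excerpt) - JVM heap flags passed to Logstash
    logstashJavaOpts: "-Xmx5g -Xms3g"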

How can I control RAM usage to prevent Logstash from being killed?

Also, I am not able to find the Logstash log file inside the pod.

Thanks in advance for your help,

Tanveer

Welcome to the community!
RAM consumption depends on the amount of data in the pipeline; larger XML/JSON documents or files consume more.
You can, for example, set: -Xms4g -Xmx4g

  • Set the minimum (Xms) and maximum (Xmx) heap allocation size to the same value to prevent the heap from resizing at runtime, which is a very costly process.

Read more
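In a standard Logstash install those flags live in config/jvm.options; with the Helm chart they are usually passed through logstashJavaOpts in values.yaml instead, as you already do. A minimal sketch, showing only the relevant lines:

    # config/jvm.options (excerpt) - set min and max heap to the same size
    -Xms4g
    -Xmx4g

or, for the Helm values file:

    # values.yaml (excerpt)
    logstashJavaOpts: "-Xms4g -Xmx4g"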

If the problem persists, please share your configuration (input, filter, output) so we can understand what's happening inside the pipeline.

I just set -Xms5g -Xmx5g, but the pod is still killed by the OOM killer.
Whenever I run the "ps -ef" command inside the container, I get
"/usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -Dfile.encodi" as output.

I am using the Helm installation, so in my values.yaml I just set -Xms5g -Xmx5g.
Is there any configuration that could override this setting? In this case the heap should have been set to 5 GB, but that is not reflected in the Java process.

It should be under logstashJavaOpts.
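To confirm the value is actually reaching the pod, you can compare the container environment with the flags the JVM started with. A rough sketch, where <logstash-pod> is a placeholder and the exact environment variable name the chart injects is an assumption on my side:

    # Does the chart inject the Java options into the container environment?
    kubectl exec <logstash-pod> -- env | grep -i java_opts
    # Which heap flags did the JVM actually start with?
    kubectl exec <logstash-pod> -- ps -ef | grep java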

We changed it at the same location, but it is not reflected when the process starts.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.