Logstash not able to handle large volume of logs

We have two 32 GB Logstash servers, and we are receiving almost 3-4 million events per hour in Logstash (roughly 900-1,100 events per second).
We also have a queue depth of 4 GB on each. Still, Logstash is not able to process these logs, and we are getting JVM out-of-heap exceptions. Any idea where we are going wrong? Is there a better way to approach this?
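For reference, the relevant settings live in `jvm.options` and `logstash.yml`; ours look roughly like this (the heap value below is illustrative, not our actual figure; the queue cap is the 4 GB mentioned above):

```
# jvm.options -- Logstash heap size (16g is a placeholder value)
-Xms16g
-Xmx16g
```

```yaml
# logstash.yml -- persistent queue capped at 4 GB
queue.type: persisted
queue.max_bytes: 4gb
```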

Can you share what your config looks like?

By configuration, do you mean the pipeline configuration? If yes, a lot of processing is going on there. I can't share that file because of privacy concerns.
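In broad strokes, though, it is shaped like this (a redacted sketch; the real filters and hosts are omitted):

```
input {
  beats {
    port => 5044
  }
}

filter {
  # heavy grok/mutate/date processing happens here (redacted)
}

output {
  elasticsearch {
    hosts => ["http://es-host:9200"]  # placeholder host
  }
}
```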

But in a nutshell, we are getting logs from almost 12-15 servers using Filebeat as the forwarder.
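On the Filebeat side, each server runs something along these lines (paths and hostnames are placeholders):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log               # placeholder path

output.logstash:
  hosts: ["ls-01:5044", "ls-02:5044"]    # placeholders for the two Logstash servers
  loadbalance: true                      # spread events across both
```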

So, any idea what the correct configuration for our use case should be?
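For reference, the throughput-related knobs in `logstash.yml` that seem most relevant are these (values below are the documented defaults, not our actual settings):

```yaml
# logstash.yml -- pipeline throughput settings (placeholder/default values)
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 125   # events per worker per batch (default 125)
pipeline.batch.delay: 50   # ms to wait while filling a batch (default 50)
```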

Which version of the stack are you using?

What does CPU usage look like on the Logstash hosts during processing?

How many CPU cores do they have per host?

How many events per second are you seeing? (One way to check is sketched after these questions.)

What is the size of the Elasticsearch cluster you are writing to?

How many indices are you actively writing to?
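For the event-rate question, Logstash reports running totals through its monitoring API; a quick check, assuming the default API port 9600:

```
curl -s 'localhost:9600/_node/stats/events?pretty'
```

Dividing the `out` counter by the process uptime gives a rough average events-per-second figure.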
