I have a four-node Elasticsearch cluster with a single Logstash instance pointing at all four nodes. On average, about 600-700 UDP/TCP events per second flow through Logstash.
My problem is that every time Logstash starts up, I get the following message:
"CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 320000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 64)"
I originally started with 32 workers (the default) and a batch size of 500, but I have tried slowly increasing those numbers up to 64 workers and a batch size of 5000, and I still get the message above. (Note that 64 × 5000 = 320,000, which matches the inflight limit the warning reports.)
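For reference, this is roughly what the relevant pipeline settings look like in my `logstash.yml` (assuming the Logstash 5.x+ setting names; adjust if your version configures these elsewhere):

```yaml
# logstash.yml - pipeline tuning I have been experimenting with
pipeline.workers: 64       # tried values from 32 (default) up to 64
pipeline.batch.size: 5000  # tried values from 500 up to 5000
```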
Additionally, I have set LS_JAVA_OPTS="-Xms30g -Xmx30g" so that Logstash uses a 30 GB JVM heap.
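Concretely, I export the heap setting before starting Logstash, along the lines of the sketch below (the exact file you put this in, e.g. a sysconfig or startup-options file, depends on how Logstash is installed):

```shell
# Give the Logstash JVM a fixed 30 GB heap (min = max to avoid resizing)
export LS_JAVA_OPTS="-Xms30g -Xmx30g"
```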
I have also tested slowly increasing the heap from 2 GB to 30 GB and still get the message every time.
Does anyone know why this occurs? The numbers I'm using for both the heap and the batch settings feel pretty excessive given the low volume of messages coming through.