"Recommended inflight events max exceeded" warning persists after increasing batch size and heap

I have a 4-node Elasticsearch cluster with a single Logstash instance pointing at all four nodes. On average, roughly 600-700 packets of UDP/TCP traffic are sent through Logstash per second.
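For reference, here is a rough sketch of the kind of pipeline configuration I mean; the ports and host names below are placeholders for illustration, not my real values:

input {
  udp { port => 5514 }
  tcp { port => 5514 }
}

output {
  elasticsearch {
    hosts => ["es1:9200", "es2:9200", "es3:9200", "es4:9200"]
  }
}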

My problem is that every time Logstash starts up, I get the following message:

"CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 320000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 64)"

I originally set up the pipeline with 32 workers (the default) and a batch size of 500, but I have tried slowly increasing those numbers, up to 64 workers and a batch size of 5000, and I still get the message above.
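Both settings live in logstash.yml; here is a sketch with the values I am currently running (the comments are my own annotations):

pipeline.workers: 64        # worker threads pulling event batches off the queue
pipeline.batch.size: 5000   # maximum number of events a worker collects per batch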

Additionally, I have set LS_JAVA_OPTS="-Xms30g -Xmx30g" so that Logstash uses 30 GB for the JVM heap.
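For reference, this is how I export the heap settings before starting Logstash (the config path is just a placeholder):

export LS_JAVA_OPTS="-Xms30g -Xmx30g"
bin/logstash -f /etc/logstash/conf.d/pipeline.conf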

I have also tested slowly increasing the heap from 2 GB up to 30 GB and still get the message every time.

Does anyone know why this occurs? The values I am using for both the heap and the batch feel pretty excessive given the low volume of messages coming through.


I have just learned that the Logstash source code (SOURCE) contains preset logic that warns whenever the configured maximum number of inflight events exceeds 10,000, regardless of whether your system handles it well. The maximum inflight count is the batch size multiplied by the number of pipeline workers (5000 × 64 = 320,000 in my case), so raising those settings makes the warning more likely, not less. This is not an actual problem, and the warning cannot be avoided once your settings are above the threshold.

MAX_INFLIGHT_WARN_THRESHOLD = 10_000

# max_inflight is computed as batch_size * pipeline_workers earlier in the pipeline code
if max_inflight > MAX_INFLIGHT_WARN_THRESHOLD
  @logger.warn("CAUTION: Recommended inflight events max exceeded! Logstash will run with up to #{max_inflight} events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently #{batch_size}), or changing the number of pipeline workers (currently #{pipeline_workers})", default_logging_keys)
end
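As a back-of-the-envelope check, here is a small sketch (assuming max_inflight is batch_size * pipeline_workers, as noted above) plugging in the settings from the original post:

MAX_INFLIGHT_WARN_THRESHOLD = 10_000

batch_size       = 5_000
pipeline_workers = 64
max_inflight     = batch_size * pipeline_workers  # 64 * 5_000 = 320_000

# 320,000 is far above the 10,000 threshold, so the warning fires;
# only lowering the batch size or worker count would silence it.
puts "warning fires" if max_inflight > MAX_INFLIGHT_WARN_THRESHOLD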

