Each output has its own retry mechanism. If an event is in an output's retry queue, the event is still referenced, so the JVM does not garbage-collect it.
Outputs like elasticsearch will not accept new events while there are events in the retry queue. This has a back-pressure effect that propagates all the way back to the inputs, which stop accepting new events from upstream.
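As a rough illustration of where retry behavior is configured (the URL is a placeholder, and the options shown reflect my understanding of the http output plugin's retry settings, so check the plugin docs for your version), a minimal http output might look like:

```
output {
  http {
    url => "https://example.com/ingest"   # hypothetical endpoint
    http_method => "post"
    # Failed events are re-queued, which is what holds references
    # and ultimately produces back-pressure toward the inputs
    retry_failed => true
    retryable_codes => [429, 500, 502, 503, 504]
  }
}
```

While events sit in that retry queue, the pipeline worker is blocked on the output, so inputs upstream stop pulling new events.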
When I'm working with the http output plugin, it doesn't seem to handle back-pressure properly. Can someone help with this?