Log events lost when Elasticsearch connection goes down


We're trying Filebeat, forwarding logs to Logstash (for JSON parsing) and then on into Elasticsearch (ES). One of the scenarios we're testing is the connection between Logstash and ES going down, followed by the Logstash service being terminated.

In this scenario, log events buffered in Logstash are lost. Can someone confirm that this is the expected behaviour, and whether we can work around it or at least minimize the loss somehow?
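For reference, a minimal Logstash pipeline matching this setup might look like the following sketch (the port, hosts, and the json filter's source field are assumptions, not details from this thread):

```
input {
  beats {
    port => 5044                  # Filebeat ships events to this listener
  }
}
filter {
  json {
    source => "message"           # parse the JSON payload out of the log line
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # the connection under test
  }
}
```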


LS currently doesn't persist its internal queue. Until it does, I don't see how you can completely eliminate the risk of LS losing messages.

Is there a way to minimize it, maybe? For example, by stopping the beats listener when the upstream connection has closed, or any other suggestion?

I've also come across the congestion_threshold parameter. How does that work?
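For reference, congestion_threshold is an option on the beats input plugin (it has since been deprecated in newer plugin versions); as I understand it, it controls how many seconds the input will block on a congested internal pipeline before closing the connection back to Filebeat, which then retries delivery. A sketch of where it would go, with an illustrative value:

```
input {
  beats {
    port => 5044
    congestion_threshold => 30   # seconds to block on a full pipeline before dropping the connection
  }
}
```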