Pattern to drop events if output is becoming clogged

I'm looking for a good pattern that would let me start discarding events flowing through one of my output pipelines, which sends data via HTTP to a third party. The HTTP connections are not always fast enough to handle the large volume of logs we process, so that pipeline fills up and starts backing up the main filtering pipeline that feeds multiple output pipelines.

Ideally I'd like to start discarding events from that output pipeline rather than causing that slow output to start backing up the main pipeline.

Is there a good pattern I could employ to achieve this? I've looked through the patterns in the documentation for pipeline-to-pipeline communication ( https://www.elastic.co/guide/en/logstash/7.7/pipeline-to-pipeline.html#distributor-pattern ) and don't see anything that matches my use case.

Logstash is designed to deliver data at least once, so pipeline-to-pipeline by itself will not solve your problem. That said, you could introduce another pipeline in front of your HTTP output pipeline and connect the two using a udp output and input. If the http output gets backed up, the udp input will start dropping packets, while the udp output on the upstream side keeps sending them, so the backpressure never reaches the main pipeline.
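For anyone wanting to try this, here is a minimal sketch of how the bridge could be wired up. The pipeline ids, file paths, local port 9999, and third-party URL are all made-up placeholders; only the general shape (pipeline output from the main pipeline → udp bridge pipeline → udp input + http output) comes from the suggestion above.

```
# pipelines.yml -- sketch with hypothetical ids and paths
- pipeline.id: main-filter
  path.config: "/etc/logstash/conf.d/main-filter.conf"
- pipeline.id: udp-bridge
  path.config: "/etc/logstash/conf.d/udp-bridge.conf"
- pipeline.id: http-out
  path.config: "/etc/logstash/conf.d/http-out.conf"
```

```
# udp-bridge.conf -- receives events from the main pipeline (whose output
# would include something like: pipeline { send_to => ["udp_bridge"] })
# and forwards them over UDP. The udp output is fire-and-forget, so this
# pipeline never exerts backpressure on the main filtering pipeline.
input {
  pipeline { address => "udp_bridge" }
}
output {
  udp {
    host => "127.0.0.1"
    port => 9999          # hypothetical local port
    codec => json         # match the codec on the udp input below
  }
}
```

```
# http-out.conf -- if the http output backs up, this pipeline stops reading
# from the UDP socket, the receive buffer fills, and the kernel drops the
# excess packets instead of propagating backpressure upstream.
input {
  udp {
    port => 9999
    codec => json
  }
}
output {
  http {
    url => "https://third-party.example.com/ingest"   # hypothetical endpoint
    http_method => "post"
  }
}
```

The trade-off is exactly what was asked for: delivery to the HTTP output becomes best-effort, and since the drops happen at the OS level, Logstash itself won't report them.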

Ahhh, an interesting idea, I'll try it out!
