I'm looking for a good pattern that will let me start discarding events flowing through one of my output pipelines, which sends events via HTTP to a third party. The HTTP connections are not always fast enough to keep up with the large volume of logs we handle, so that pipeline fills up and starts to back up the main filtering pipeline that feeds multiple output pipelines.
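For context, the topology looks roughly like this. All pipeline ids, file paths, and the endpoint URL are placeholders for illustration, not the real configuration:

```
# pipelines.yml (ids and paths are hypothetical)
- pipeline.id: main-filter
  path.config: "/etc/logstash/main-filter.conf"
- pipeline.id: http-out
  path.config: "/etc/logstash/http-out.conf"

# main-filter.conf -- does the filtering, then fans out to the output pipelines
output {
  pipeline { send_to => ["http-out", "es-out"] }
}

# http-out.conf -- the slow HTTP output that ends up backing everything up
input {
  pipeline { address => "http-out" }
}
output {
  http {
    url         => "https://third-party.example.com/ingest"
    http_method => "post"
  }
}
```

When `http-out`'s queue fills, the `pipeline` output in `main-filter` blocks, which is the backpressure I'm trying to avoid.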
Ideally I'd like to start discarding events in that output pipeline rather than letting the slow output back up the main pipeline.
Is there a good pattern I could employ to achieve this? I've looked through the pipeline-to-pipeline communication patterns in the documentation ( https://www.elastic.co/guide/en/logstash/7.7/pipeline-to-pipeline.html#distributor-pattern ) and don't see one that matches my use case.