Logstash multiple pipelines failure with persistent queues

Logstash has an at-least-once delivery model: every event is sent to each output at least once. If an output stops accepting data, Logstash has to buffer events until the output starts accepting them again. The in-memory buffers are very limited, so using persistent queues increases the amount of buffering available.
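Persistent queues are enabled per pipeline. A minimal sketch, assuming a pipeline named main; the id, path, and sizes below are illustrative:

    # pipelines.yml -- pipeline id, path, and sizes are illustrative
    - pipeline.id: main
      path.config: "/etc/logstash/conf.d/main.conf"
      queue.type: persisted     # disk-backed queue instead of the small in-memory buffer
      queue.max_bytes: 1gb      # cap on disk buffering for this pipeline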

However, in any such configuration the buffers can still fill up, and once they are full events stop moving through the pipeline.

If you are willing to lose data when one output stops, you can still use the output isolator pattern, but insert a fourth pipeline that connects to the final output over UDP, which happily drops packets when it has to.

                         pipeline input -- output that does not stop
                        /
input -- pipeline output
                        \
                         pipeline input -- udp output 
                                                     \
                                                      udp input -- output that stops
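The diagram above can be sketched as four pipelines using pipeline-to-pipeline communication. The pipeline ids, addresses, and UDP port below are illustrative, and the inputs and final outputs are placeholders (...):

    # pipelines.yml (sketch; ids and paths are illustrative)
    - pipeline.id: intake
      path.config: "/etc/logstash/conf.d/intake.conf"
    - pipeline.id: stable
      path.config: "/etc/logstash/conf.d/stable.conf"
    - pipeline.id: bridge
      path.config: "/etc/logstash/conf.d/bridge.conf"
    - pipeline.id: flaky
      path.config: "/etc/logstash/conf.d/flaky.conf"

    # intake.conf -- fan each event out to both downstream pipelines
    input  { ... }
    output {
      pipeline { send_to => ["stable", "bridge"] }
    }

    # stable.conf -- feeds the output that does not stop
    input  { pipeline { address => "stable" } }
    output { ... }

    # bridge.conf -- forwards over udp, which drops events rather than block
    input  { pipeline { address => "bridge" } }
    output { udp { host => "127.0.0.1" port => 9999 } }

    # flaky.conf -- feeds the output that stops
    input  { udp { port => 9999 codec => json } }
    output { ... }

If the flaky output stalls, its persistent queue fills and the udp input stops reading; the bridge pipeline keeps sending, the packets are silently dropped, and the intake and stable pipelines never see back-pressure.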