Does one failed output kill / stop the whole pipeline?

Hi,

I have a Logstash output configuration like this:

output {
  tcp {
    mode => "client"
    host => "server01"
    port => 5400
    codec => "line"
  }
  tcp {
    mode => "client"
    host => "server02"
    port => 5400
    codec => "line"
  }
  file {
    path => "/logs/%{[filename]}-%{+YYYY.MM.dd}"
    write_behavior => "append"
    file_mode => 0600
  }
  elasticsearch {
    hosts => ["http://server3:9200"]
    index => "%{[logtype]}-%{+YYYY.MM.dd}"
  }
}

It seems that if one output fails, the whole pipeline stops. If, for example, a tcp output fails, it retries about 20 times and then everything stops. Is this the default behavior? And if so, is there any way around it?

Best Regards,
Bjorn

If an output cannot write events, it will queue them. If the queue fills, back-pressure will shut down the pipeline. That is working as expected.
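One thing that can buy you headroom (though it does not remove the back-pressure behaviour) is a persistent queue, which buffers events on disk instead of in the small in-memory queue. A minimal sketch of the relevant settings in logstash.yml; the size and path below are just examples, not recommendations:

# logstash.yml -- buffer events on disk while an output is struggling
queue.type: persisted
queue.max_bytes: 4gb                    # example size; roughly the outage you can absorb
path.queue: /var/lib/logstash/queue     # example path on a disk with enough free space

Once even the persistent queue fills, back-pressure still propagates to the inputs, so this only delays the stall rather than preventing it.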

Thanks Badger

"That is working as expected."

That seems logical, but will it stop processing all outputs if one fails? Let's say outputs #1 and #2 work and #3 fails. Will it stop at that output and retry it until it works again, never reaching output #4?

Thanks again.

/Bjorn

Yes, there is a limited amount of queue space. Once that is filled, processing will stop.
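If you need the other outputs to keep flowing while one of them is down, the usual workaround is the pipeline-to-pipeline "output isolator" pattern: each output runs in its own pipeline with its own queue, and the main pipeline fans events out to them with the pipeline output plugin. A rough sketch reusing the hosts from your config; the pipeline ids, file paths and queue settings below are just examples:

# pipelines.yml -- one downstream pipeline per output, each with its own queue
- pipeline.id: main
  path.config: "/etc/logstash/main.conf"
- pipeline.id: tcp-server01
  path.config: "/etc/logstash/tcp-server01.conf"
  queue.type: persisted
- pipeline.id: es
  path.config: "/etc/logstash/es.conf"
  queue.type: persisted

# main.conf -- inputs and filters stay here; fan events out to the isolated outputs
output {
  pipeline { send_to => ["tcp-server01", "es"] }
}

# tcp-server01.conf -- if server01 is unreachable, only this pipeline's queue fills
input  { pipeline { address => "tcp-server01" } }
output {
  tcp {
    mode  => "client"
    host  => "server01"
    port  => 5400
    codec => "line"
  }
}

# es.conf -- Elasticsearch keeps indexing even while a tcp output is blocked
input  { pipeline { address => "es" } }
output {
  elasticsearch {
    hosts => ["http://server3:9200"]
    index => "%{[logtype]}-%{+YYYY.MM.dd}"
  }
}

With this layout a blocked tcp output only fills its own pipeline's queue. If that queue also fills, back-pressure will eventually reach the main pipeline again, so size queue.max_bytes for the outage window you want to ride out.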


Thanks
