Two Elasticsearch outputs: when one ES cluster has issues, Logstash stops shipping to both

Hello all,

I'm working on migrating from Elastic Stack 5.6.X to 6.3.X. To avoid having to reindex logs from the old cluster to the new one, I thought I would set up a second Elasticsearch output for a few weeks and have logs go to both clusters for a while before switching log shipping over to go directly to the new cluster.
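Roughly, the idea is a single pipeline with two elasticsearch outputs, something like this (hosts and index names here are placeholders, not my real config):

```
input {
  beats {
    port => 5044
  }
}

output {
  # old 5.6.x cluster (placeholder address)
  elasticsearch {
    hosts => ["http://old-cluster:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
  # new 6.3.x cluster (placeholder address)
  elasticsearch {
    hosts => ["http://new-cluster:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```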

For some reason, shipping from Logstash to the new cluster has failed a couple of times (still working on finding the root cause); Elasticsearch would just stop accepting new logs.

This caused Logstash to stop shipping logs to the old cluster as well, which is less than optimal.

Question: Is it possible to have more than one output in Logstash and keep sending logs to all healthy outputs even when one output fails?

Any tips welcome 🙂

Cheers,
AB

I guess you should at least show us your Logstash config file. Maybe you have dependencies between the outputs.

Not in a single pipeline. Logstash tries to avoid data loss, so it will make sure all outputs within a pipeline have succeeded for a batch before moving on to the next one.

Thanks @Christian_Dahlqvist

I guess it is this, from the documentation:

Having multiple pipelines in a single instance also allows these event flows to have different performance and durability parameters (for example, different settings for pipeline workers and persistent queues). This separation means that a blocked output in one pipeline won’t exert backpressure in the other.
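So on 6.x I could presumably split the two outputs into separate pipelines in pipelines.yml, something like this (pipeline IDs and paths are made up):

```
# pipelines.yml (Logstash 6.x); IDs and paths are examples
- pipeline.id: ship-to-old
  path.config: "/etc/logstash/conf.d/old-cluster.conf"
- pipeline.id: ship-to-new
  path.config: "/etc/logstash/conf.d/new-cluster.conf"
```

Each config file would then hold its own input and one elasticsearch output, so a blocked output in one pipeline doesn't stall the other. As far as I can tell, each pipeline needs its own input (two pipelines can't share one Beats port), so in practice each would listen on its own port or read the same source independently.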

Looks like multiple pipelines aren't supported in Logstash 5.6.5. Any way to get similar functionality there?

No, not that I am aware of. That would probably require a message queue and multiple Logstash instances, each pulling from it separately into a different cluster.
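For example, with Kafka as the queue, each Logstash instance could consume the full stream under its own consumer group, so a blocked Elasticsearch cluster only stalls its own consumer. A minimal sketch (broker, topic, group, and host names are just examples):

```
# consumer for the old cluster; run a second Logstash instance
# with group_id => "es-new" and the new cluster's address
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # example broker address
    topics            => ["logs"]       # example topic
    group_id          => "es-old"       # separate group per instance, so each group gets every event
  }
}

output {
  elasticsearch {
    hosts => ["http://old-cluster:9200"]   # example address
  }
}
```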

Ok. Thank you very much for the information.
