Handle timeouts with the syslog output
We have a use case of sending logs to two ELK stacks and letting the first ELK stack forward some events to a second ELK stack.

So messages arrive at ELK-Stack1. Logstash receives the messages and outputs to its Elasticsearch instance and at the same time sends the same message to ELK-Stack2's Logstash over port 514 TCP.
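For clarity, the ELK-Stack1 pipeline described above would look roughly like this (hostnames are illustrative placeholders, not our real ones):

```
input {
  syslog {
    port => 514
  }
}

output {
  # Index into ELK-Stack1's own Elasticsearch
  elasticsearch {
    hosts => ["http://elk1-es:9200"]   # hypothetical host
  }
  # Forward the same event to ELK-Stack2's Logstash over TCP 514
  tcp {
    host => "elk2-logstash"            # hypothetical host
    port => 514
    codec => "json_lines"
  }
}
```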

We noticed, however, that if the second ELK stack's Logstash instance is unreachable for some reason, the first ELK stack's Logstash will queue the events until the persistent queue is full or the second Logstash instance responds again.

Is there a nice way to handle this? For example, could we attach a tag to a message when we are unable to forward it from ELK stack 1 to ELK stack 2, still index it into the first ELK stack's Elasticsearch instance, and then manually move the tagged messages to the second ELK stack later?

No. This is working as designed. It sounds like you want something like a DLQ, but that has only been implemented for the elasticsearch output.
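For reference, the dead letter queue is enabled in `logstash.yml`, but only the elasticsearch output actually writes events to it (e.g. on mapping errors); the tcp/syslog outputs will simply block and let the persistent queue fill, as described above. A minimal sketch of the relevant settings (the DLQ path is an illustrative example):

```
# logstash.yml
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dlq"   # optional; defaults under path.data
```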

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.