Persistently queue input events even when output fails

Hello! I have some Logstash questions. I'll give some background first so the questions make sense.

Some background:

Logstash has been part of my design for a while, but I have identified what seems like a major limitation, and I would like to have it confirmed before going in a different direction.

In my design, dumb Logstash nodes act as syslog aggregators inside a larger stack. Many services send syslog messages to these aggregators, which then forward them to RabbitMQ:
Syslog -> *Logstash* -> FW -> RabbitMQ -> Logstash -> Elasticsearch
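
To make this concrete, the first Logstash node runs a pipeline roughly like the one below. The ports, hostname, exchange name, and tag are placeholders for illustration, not my real values:

    input {
      tcp {
        port => 5140            # placeholder syslog port
        type => "syslog"
      }
      udp {
        port => 5140
        type => "syslog"
      }
    }

    filter {
      # Coarse filtering and tagging only; the fine-grained work
      # happens on the inner Logstash behind RabbitMQ.
      mutate { add_tag => ["aggregated"] }
    }

    output {
      rabbitmq {
        host => "rabbitmq.example.internal"   # placeholder host
        exchange => "logs"
        exchange_type => "direct"
        key => "syslog"
        durable => true          # durable exchange
        persistent => true       # mark messages as persistent
      }
    }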

The first Logstash does some coarse filtering and tags the messages accordingly, while the second, "inner" Logstash handles the finer-grained filtering and mutation.

The reason for this setup is network- and HA-related: I'm trying to protect against failures in the firewall (FW) and in RabbitMQ. Only the first *Logstash* can be modified.

I use queue.type: persisted in Logstash 7.4.2-1.
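
That is, in logstash.yml I have something like the following (the size and path here are just examples, not recommendations):

    # logstash.yml
    queue.type: persisted
    queue.max_bytes: 4gb                   # disk cap before back-pressure kicks in
    path.queue: /var/lib/logstash/queue    # where the persisted queue pages live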

Questions

  • If Logstash loses its connection to RabbitMQ, does it simply shut down its inputs? Is that the intended behavior?
  • Does Logstash even persist the input messages to the queue, given that the inputs are defined as tcp/udp?
  • If so, could this be solved using multiple pipelines (see the sketch after this list)? Or would the whole flow stop anyway if the last pipeline, the one outputting to RabbitMQ, fails?
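
For the multiple-pipelines idea, what I had in mind is pipeline-to-pipeline communication, roughly like this (the pipeline IDs, paths, and address name are made up for illustration):

    # pipelines.yml
    - pipeline.id: intake
      path.config: "/etc/logstash/intake.conf"
      queue.type: persisted
    - pipeline.id: rabbit-out
      path.config: "/etc/logstash/rabbit-out.conf"
      queue.type: persisted

    # intake.conf would end with:
    output { pipeline { send_to => ["rabbit"] } }

    # rabbit-out.conf would start with:
    input { pipeline { address => "rabbit" } }

The hope is that the rabbit-out pipeline's persisted queue keeps buffering while the intake pipeline keeps accepting syslog, but I'm not sure that's how back-pressure actually propagates between pipelines.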

Thanks!
