Logstash persistent queues with pipelines behaviour

Hi,

I have the following configuration:

1 Logstash machine l1 with 4 pipelines: p1, p2, p3 and p4.
p1 has the input, does some processing and then outputs to pipelines p2, p3 and p4.

p2 does some processing and outputs to syslog.
p3 does some processing and outputs to another Logstash machine, l2.
p4 does some processing and outputs to another Logstash machine, l3.

p1, p2, p3 and p4 are all configured with persistent queues.
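
To make that concrete, this is roughly how it is wired up; the file names and paths below are just placeholders, not my exact configuration:

# pipelines.yml (sketch)
- pipeline.id: p1
  path.config: "/etc/logstash/conf.d/p1.conf"
  queue.type: persisted
- pipeline.id: p2
  path.config: "/etc/logstash/conf.d/p2.conf"
  queue.type: persisted
- pipeline.id: p3
  path.config: "/etc/logstash/conf.d/p3.conf"
  queue.type: persisted
- pipeline.id: p4
  path.config: "/etc/logstash/conf.d/p4.conf"
  queue.type: persisted

# p1.conf (sketch, real input and filters omitted)
output { pipeline { send_to => ["p2", "p3", "p4"] } }

# p2.conf / p3.conf / p4.conf (sketch): each declares its own address
input { pipeline { address => "p2" } }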

I have some questions about the queues:

  1. An event arrives at p1 and is stored in its queue, then there is some processing (filters) and finally p1 outputs it with something like

output { pipeline { send_to => ["p2", "p3", "p4"] } }

What happens if the output to p2 and p3 succeeds but the output to p4 fails? Will the event remain unacknowledged in the p1 queue?

  2. If p4 cannot perform its output, for instance because it lost contact with l3, is it correct to think that the persistent queue in p4 will fill up to its max size (queue.max_bytes, see the snippet after these questions) and p4 will stop accepting events; then, since p1 can no longer output to p4, the p1 queue will also fill up to its max size and stop accepting events, so p2 and p3 will stop receiving events as well?

  3. If I change the configuration so that p1, p2 and p3 have persistent queues and p4 has a memory queue (sketched below), then in a case similar to question 2, will I still be able to keep sending events to p2 and p3, and will I lose the events sent to p4?
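
For questions 2 and 3, the relevant pipelines.yml settings would be something like this; the size value and paths are only examples of what I mean:

# Question 2: the max size I am referring to (example value)
- pipeline.id: p4
  path.config: "/etc/logstash/conf.d/p4.conf"
  queue.type: persisted
  queue.max_bytes: 1gb

# Question 3: the changed configuration, p4 on a memory queue
- pipeline.id: p4
  path.config: "/etc/logstash/conf.d/p4.conf"
  queue.type: memory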

thanks
