I have 4 pipelines because I have to send data to different destinations: for example, a prod and a nonprod Elasticsearch cluster, an S3 bucket, and a custom endpoint. I could configure one pipeline with multiple outputs, but I don't want one slow or broken destination to block the whole pipeline, so I broke it down into 4 separate pipelines. Most of the pipelines share the same filters. I am planning to run 4 copies of Filebeat on each server, each outputting to one of the 4 pipelines. I would rather use pipeline-to-pipeline communication, but apparently it can also suffer from the slow-output effect. Is this Logstash architecture a good idea? Will I see problems with Filebeat?
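For context, the pipeline-to-pipeline layout I'm considering is roughly this (pipeline IDs, paths, and hosts are made up). With in-memory queues, one blocked downstream pipeline stalls the fan-out, which is the slow-output effect I mentioned; persistent queues on the downstream pipelines would buffer that instead.

```
# pipelines.yml — hypothetical pipeline IDs and paths
- pipeline.id: ingest
  path.config: "/etc/logstash/conf.d/ingest.conf"
- pipeline.id: es-prod
  path.config: "/etc/logstash/conf.d/es-prod.conf"
  queue.type: persisted   # buffer on disk so a slow destination lags instead of blocking upstream
# ...same for es-nonprod, s3, custom...

# ingest.conf — one Beats input, shared filters, fan-out to the 4 destination pipelines
input { beats { port => 5044 } }
filter {
  # shared filters go here
}
output {
  pipeline { send_to => ["es-prod", "es-nonprod", "s3", "custom"] }
}

# es-prod.conf — one small downstream pipeline per destination
input { pipeline { address => "es-prod" } }
output { elasticsearch { hosts => ["https://prod-es:9200"] } }
```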
IMO the correct way to do this would be with Kafka between Beats and Logstash. Data flow would be...
Beats --> Kafka ("raw" topic) --> Logstash processing pipeline --> Kafka ("processed" topic) --> Logstash output pipelines (one simple pipeline per destination)
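A minimal sketch of those two Logstash stages (broker address and topic names are assumptions; Filebeat would point its kafka output at the "raw" topic):

```
# processing.conf — consume raw events, apply the shared filters once, republish
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics            => ["raw"]
    group_id          => "logstash-processing"
    codec             => json
  }
}
filter {
  # shared filters go here
}
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id          => "processed"
    codec             => json
  }
}

# es-prod.conf — one of the 4 output pipelines; each uses its own consumer
# group, so each destination reads an independent copy of the stream
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics            => ["processed"]
    group_id          => "es-prod"
    codec             => json
  }
}
output {
  elasticsearch { hosts => ["https://prod-es:9200"] }
}
```

Because each output pipeline tracks its own Kafka offsets, a slow or down destination simply falls behind on its topic while the other three keep consuming.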
Thanks. We don't want to introduce another infrastructure component into our stack.