Logstash via Kafka - a case for pipeline usage?

Loading Metricbeat/Filebeat data via two Kafka queues, with a consumer configured for each. I have a single 'filter' file with 20-30 index choices determined via if/else if. Could I get better performance if I split this up into one pipeline per index? Am I even thinking about pipelines correctly?
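For context, the single filter file looks roughly like this - a minimal sketch, where the field names (`[fields][service]`) and index names are invented for illustration:

```
filter {
  # Route each event to a target index via a long if/else-if chain.
  if [fields][service] == "nginx" {
    mutate { add_field => { "[@metadata][target_index]" => "nginx-%{+YYYY.MM.dd}" } }
  } else if [fields][service] == "app" {
    mutate { add_field => { "[@metadata][target_index]" => "app-%{+YYYY.MM.dd}" } }
  # ... 20-30 more branches ...
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "catchall-%{+YYYY.MM.dd}" } }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][target_index]}"
  }
}
```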

Situation: significant Kafka consumer lag on a new clustered installation. I'm getting ~15K events/second, but the data nodes are not working very hard, so I'm wondering whether Logstash filter processing could be the bottleneck.
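Before splitting into pipelines, it may be worth checking the per-pipeline concurrency settings in `logstash.yml` - these are real Logstash settings, but the values below are illustrative, not recommendations:

```
# logstash.yml - tuning knobs for a single pipeline
# (defaults: workers = number of CPU cores, batch.size = 125)
pipeline.workers: 8        # threads running the filter+output stages
pipeline.batch.size: 250   # events each worker pulls per batch
```

If raising the worker count and batch size doesn't move the needle while CPU stays low, the bottleneck is more likely the Kafka input (e.g. `consumer_threads` vs. partition count) or the Elasticsearch output than the filter chain itself.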

Hmm... upon further reading, I'm thinking I'm stuck with one pipeline, since all of this work shares a common input: Kafka.
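One possible way around the shared input is Logstash's pipeline-to-pipeline communication ("distributor pattern"): a single intake pipeline owns the Kafka consumer and forwards events to per-index downstream pipelines over the `pipeline` input/output plugins. A sketch, where the pipeline ids and paths are hypothetical:

```
# pipelines.yml
- pipeline.id: kafka-intake
  path.config: "/etc/logstash/intake.conf"
- pipeline.id: nginx-index
  path.config: "/etc/logstash/nginx.conf"

# intake.conf - reads Kafka once, routes downstream
# output {
#   if [fields][service] == "nginx" {
#     pipeline { send_to => ["nginx-index"] }
#   }
# }

# nginx.conf - heavy filtering runs in its own pipeline
# input { pipeline { address => "nginx-index" } }
```

Each downstream pipeline gets its own worker threads and queue, so a slow filter for one index no longer backs up the others.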

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.