Loading metricbeat/filebeat data via 2 Kafka queues. I have a single 'filter' file with 20-30 index choices determined via if/then/else if. Could I get better performance if I split this up into one pipeline per index? Am I even thinking about pipelines correctly?
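For context, here is a minimal sketch of what the split might look like using Logstash's multiple-pipelines feature (`pipelines.yml`) with pipeline-to-pipeline communication: a thin intake pipeline reads from Kafka and routes events to downstream pipelines, each of which owns its own filters and index. All pipeline IDs, topic names, and index names below are hypothetical placeholders, not a tested config:

```yaml
# pipelines.yml -- hypothetical sketch, not a drop-in config
- pipeline.id: beats-intake
  config.string: |
    input { kafka { topics => ["metricbeat", "filebeat"] } }
    output {
      # route on a field instead of running every event through one big if/else chain
      if [agent][type] == "metricbeat" {
        pipeline { send_to => ["metrics"] }
      } else {
        pipeline { send_to => ["logs"] }
      }
    }

- pipeline.id: metrics
  pipeline.workers: 4            # tune per pipeline independently
  config.string: |
    input  { pipeline { address => "metrics" } }
    filter { mutate { add_tag => ["metrics"] } }   # placeholder for the real filters
    output { elasticsearch { index => "metricbeat-%{+YYYY.MM.dd}" } }

- pipeline.id: logs
  config.string: |
    input  { pipeline { address => "logs" } }
    filter { mutate { add_tag => ["logs"] } }      # placeholder for the real filters
    output { elasticsearch { index => "filebeat-%{+YYYY.MM.dd}" } }
```

The main wins here are per-pipeline worker/batch tuning and the ability to see in the monitoring API which pipeline is actually slow, rather than raw conditional-evaluation speed.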
Situation: significant Kafka lag on a new clustered installation. I'm getting ~15K events/second, but the data nodes are not working very hard, so I'm wondering whether Logstash filter processing could be the bottleneck.