Our team currently runs Logstash in containers on a Kubernetes cluster to consume data from Kafka and push it to Elasticsearch. During performance testing the throughput was not good enough: with 3 Logstash pods we can consume only about 100,000 msg/sec, and adding a fourth pod does not improve throughput beyond that. Can anyone suggest how to tune Logstash for high-performance consumption from Kafka?
If the number of partitions of your Kafka topic is less than the number of Logstash consumers, you will always have idle Logstash pods and therefore not have any performance benefit of adding a new Logstash pod. Increase the number of partitions of your topic if your goal is to increase the indexing throughput.
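As a minimal sketch of how the consumer side maps onto partitions, the Kafka input plugin's `consumer_threads` setting controls how many consumer threads each Logstash instance contributes to the consumer group. The broker address, topic name, and group id below are placeholders, not values from this thread:

```
# Hypothetical kafka input sketch. Total consumer threads across all pods
# (pods x consumer_threads) should not exceed the topic's partition count,
# or the extra threads sit idle.
input {
  kafka {
    bootstrap_servers => "kafka:9092"    # placeholder broker address
    topics            => ["my-topic"]    # placeholder topic name
    group_id          => "logstash"      # all pods share one consumer group
    consumer_threads  => 4               # threads per pod; size against partitions
  }
}
```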
Thanks for your advice. We're using 3 Kafka nodes with 60 partitions per topic, consumed by 3 Logstash pods. The Logstash pods aren't idle, but we need to speed up consumption from Logstash while keeping consumer-group lag low.
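If the pods aren't idle, the bottleneck may be inside the Logstash pipeline rather than on the Kafka side. One avenue worth checking is the pipeline settings in `logstash.yml`; the values below are illustrative assumptions to tune against your own hardware, not recommendations from this thread:

```
# Hypothetical logstash.yml tuning sketch (values are assumptions, not benchmarks).
pipeline.workers: 8        # typically sized to the CPU cores available to the pod
pipeline.batch.size: 1000  # larger batches produce larger Elasticsearch bulk requests
pipeline.batch.delay: 50   # ms to wait for a batch to fill before flushing
```

With 60 partitions and 3 pods, each pod could also run up to 20 `consumer_threads` before any thread would be left without a partition; raising that alongside worker count, and confirming the Elasticsearch output isn't the limiting stage, is a reasonable next experiment.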