Hello everyone, I have a problem I've been working on for days and I can't find a workable solution. I need to consume about 150k messages per second through a Kafka input (one topic, one group_id). I have 8 Logstash instances on separate machines, all well provisioned with both CPU and memory (each already runs about fifteen active pipelines, but they are not a problem at the moment). Basically I can consume about 8k messages per second, using `consumer_threads => 5` on each node (the topic has 48 partitions). I always top out at a maximum of about 42k messages per second and the rest goes into lag. I've tried tuning the `pipeline.workers` and `pipeline.batch.size` settings, but they haven't changed the picture. Has anyone tested a solution that could work for me, without adding more Logstash indexers? Thank you all.
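For reference, here is a minimal sketch of the input as described above, with hypothetical broker, topic, and group names (the fetch/poll values are illustrative, not tested recommendations). One thing the numbers imply: 8 nodes × 5 threads gives 40 consumers in the group against 48 partitions, so some threads own two partitions while others own one.

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"  # placeholder brokers
    topics            => ["my_topic"]               # placeholder topic name
    group_id          => "logstash_consumers"       # shared by all 8 nodes
    consumer_threads  => 5                          # per node, as in the post
    # Larger fetch/poll sizes can raise raw consumer throughput;
    # these values are assumptions for illustration only.
    max_poll_records  => "2000"
    fetch_min_bytes   => "1048576"
  }
}
```

Note that `pipeline.workers` and `pipeline.batch.size` are not set in the input block; they live in `logstash.yml` (or per pipeline in `pipelines.yml`).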