I am getting the Logstash log message below, and it stops the data flow; the only workaround I have found is to restart Logstash, after which the problem re-occurs. Please advise.
[org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
[2020-09-28T09:28:48,673][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Sending LeaveGroup request to coordinator xxx-xxx-xxxx.dco.local:9092 (id: 2147483647 rack: null)
Sounds like back pressure from the pipeline is causing the input to suffer a timeout. Without knowing what the pipeline and outputs are doing, it is impossible to improve on the advice in the error message.
I am planning to add the settings below to the Kafka input (2.2.1) on Logstash (7.1.1). I am not sure whether this will resolve the poll timeout issue; the values are guesses and may need to change, and I may not need all of them. Can you confirm, please?
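For reference, a minimal sketch of what such a Kafka input block could look like, following the two knobs named in the error message (`max.poll.interval.ms` and `max.poll.records`). The broker address and group/client IDs are taken from the log above; the numeric values are illustrative guesses, not tested recommendations, and the option names assume a Kafka input plugin version that exposes them:

```
input {
  kafka {
    bootstrap_servers => "xxx-xxx-xxxx.dco.local:9092"
    group_id          => "logstash"
    client_id         => "logstash-0"
    # Allow more time between poll() calls before the broker evicts
    # the consumer (Kafka default is 300000 ms, i.e. 5 minutes)
    max_poll_interval_ms => "600000"
    # Return smaller batches so each poll loop finishes sooner
    # (Kafka default is 500 records)
    max_poll_records     => "100"
  }
}
```

Raising `max_poll_interval_ms` only hides slow processing; if the pipeline is genuinely backed up, lowering `max_poll_records` (or fixing the slow filter/output) is usually the safer lever.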