I am using docker.elastic.co/logstash/logstash-oss:6.0.0 with the Kafka output plugin.
The Kafka output plugin stops pushing data into Kafka when one of the Kafka nodes goes down or comes back with a different broker ID.
[2018-01-19T20:36:47,283][WARN ][org.apache.kafka.clients.NetworkClient] Connection to node 1 could not be established. Broker may not be available.
[2018-01-19T21:16:44,320][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
[2018-01-19T21:46:44,876][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
[2018-01-21T15:08:48,645][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
[2018-01-21T15:08:49,241][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
The retries parameter might not help in this case, because the broker IDs can differ from the ones the Logstash container started with (for example, the Kafka brokers can change from [1,2,3] to [1,2,4]). From the plugin documentation:
# If you choose to set `retries`, a value greater than zero will cause the
# client to only retry a fixed number of times. This will result in data loss
# if a transient error outlasts your retry count.
#
https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-kafka.html
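For reference, setting it would look something like this (the value is only illustrative; per the documentation above, any fixed count can still be exhausted by a long enough outage):

kafka {
  bootstrap_servers => "kafka:9092"
  topic_id => "topic"
  codec => "json"
  message_key => "key"
  retries => 5   # illustrative value; retries stop after this many attempts
}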
Is there a way to force Logstash to exit / kill the process in this case? That way a new Logstash container would be launched with the new broker IDs and the service would start properly. My output configuration is below, followed by a sketch of the kind of wrapper I have in mind.
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "topic"
    codec => "json"
    message_key => "key"
  }
  #stdout { codec => "rubydebug" }
}
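This is a minimal sketch of the wrapper idea, assuming the container's restart policy relaunches it on a non-zero exit; the failure threshold, the matched log string, and the pipeline path are placeholders I chose, not anything the plugin provides:

#!/usr/bin/env python3
# Hypothetical wrapper: run Logstash, watch its log output, and exit
# non-zero after too many consecutive Kafka send failures so the
# container gets recreated and re-resolves the current broker IDs.
import subprocess
import sys

FAILURE_MARKER = "Sending batch to Kafka failed"  # message seen in the logs above
MAX_CONSECUTIVE_FAILURES = 100                    # arbitrary threshold (assumption)

def main():
    # Assumed Logstash invocation and pipeline path; adjust for the image.
    proc = subprocess.Popen(
        ["logstash", "-f", "/usr/share/logstash/pipeline/logstash.conf"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    consecutive = 0
    for line in proc.stdout:
        sys.stdout.write(line)  # pass the log through unchanged
        if FAILURE_MARKER in line:
            consecutive += 1
            if consecutive >= MAX_CONSECUTIVE_FAILURES:
                # Give up: kill Logstash and exit non-zero so the
                # container's restart policy launches a fresh one.
                proc.kill()
                sys.exit(1)
        else:
            consecutive = 0
    sys.exit(proc.wait())

if __name__ == "__main__":
    main()

With restart: always (or an equivalent orchestrator policy), exiting this way should produce a fresh container that resolves kafka:9092 against the current set of brokers.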
Thanks