I've seen a few similar notes on this topic, but haven't found a solution yet.
We have Logstash 6.6.0 processes running on all of our VMs that send their logs to Kafka, specifically to a Kafka VIP with 10 brokers behind it. I've noticed that when a Kafka broker goes down, intentionally or not, any Logstash process that was talking to it enters a retry loop and will not connect again until a restart of the process forces a rebalance. The kafka input plugin has a useful session_timeout_ms setting that forces a rebalance, but I haven't had any luck finding something similar for the kafka output plugin.
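For comparison, this is the input-side setting I mean (the topic name and value here are placeholders, not our real config):

input {
  kafka {
    bootstrap_servers  => "<kafka_vip>:9092"
    topics             => ["example_topic"]
    # If no heartbeat arrives within this window, the group coordinator
    # drops the consumer and triggers a rebalance.
    session_timeout_ms => "30000"
  }
}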
This repeats in the logs when the broker is pulled:
[2019-05-14T15:56:29,326][WARN ][org.apache.kafka.clients.NetworkClient] [Producer clientId=producer-1] Connection to node 1 (<broker_name>) could not be established. Broker may not be available.
Current configuration of the processes' output:
output {
  kafka {
    bootstrap_servers => "<kafka_vip>:9092"
    topic_id          => "${OUTPUT_TOPIC}"
    compression_type  => "lz4"
    message_key       => "%{timestamphashkey}"
    acks              => "1"
    batch_size        => 5000
    codec             => "json"
  }
}
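The only related knobs I can see on the output side are the producer-level timeout/retry settings the plugin exposes. Something like the sketch below is what I'd experiment with, though I haven't confirmed that any of these make the producer give up on the dead broker the way session_timeout_ms forces a rebalance on the input side (the values are placeholders, not tested):

output {
  kafka {
    bootstrap_servers    => "<kafka_vip>:9092"
    topic_id             => "${OUTPUT_TOPIC}"
    compression_type     => "lz4"
    message_key          => "%{timestamphashkey}"
    acks                 => "1"
    batch_size           => 5000
    codec                => "json"
    # Producer-level settings exposed by the plugin (values are guesses):
    request_timeout_ms   => 30000   # fail a request if the broker doesn't answer in time
    retries              => 3       # stop retrying a failed send after N attempts instead of retrying forever (can drop events)
    retry_backoff_ms     => 1000    # wait between retries of a failed send
    reconnect_backoff_ms => 5000    # wait before re-attempting a connection to a dead node
    metadata_max_age_ms  => 30000   # force a metadata refresh so a new leader is picked up sooner
  }
}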