Kafka output gets permanently stuck after temporary wrong port — pipeline-to-pipeline address unavailable & retrying_send stall
Hi,
I am facing an issue with Logstash Kafka output when the Kafka broker becomes temporarily unreachable.
Error:
[2025-12-05T12:41:32,869][WARN ][org.logstash.plugins.pipeline.AbstractPipelineBus][6c439cc070c5500b3acef737847adbb5978f7d66189e9c4591ef93fa3f24f09f] Attempted to send event to 'forward_kafka' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Problem Scenario
1. I intentionally changed the Kafka port to a wrong one (e.g. 7879).
2. Logstash started showing Kafka connection errors (expected).
3. After a few seconds, I restored the correct port, 7878.
4. Even after restoring the correct port, Logstash never recovers; it stops forwarding logs permanently.
5. Restarting Logstash fixes it, but I want Logstash to recover automatically, without a restart.
My Logstash pipeline forwards logs to Kafka using this output configuration:
What exactly did you do, and how are your pipelines connected? You mentioned that you are using pipeline-to-pipeline communication, but you didn't share any of your configurations.
Please share your pipelines.yml file and the inputs and outputs of all the pipelines running on the same instance.
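For reference, a minimal pipelines.yml for a pipeline-to-pipeline setup usually looks something like this (the pipeline IDs and config paths here are assumptions, not taken from your setup):

```yaml
# pipelines.yml — hypothetical layout; adjust ids/paths to match your instance
- pipeline.id: main_input
  path.config: "/etc/logstash/conf.d/main_input.conf"
- pipeline.id: forward_kafka
  path.config: "/etc/logstash/conf.d/forward_kafka.conf"
```

Seeing your actual version of this, plus each pipeline's input and output blocks, would make it clear which pipeline is going down when the Kafka broker is unreachable.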
I am sending logs to the Kafka forward pipeline like this, from a separate pipeline. The main_input pipeline is the entry point for logs.
output {
  if ("forward_event" not in [tags]) {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "ScribblerCurrent"
    }
  }
  if ("forward_event" in [tags]) {
    pipeline {
      send_to => forward_Kafka
    }
  }
}
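For completeness, a sketch of what the receiving pipeline's config would typically look like (the Kafka output settings and topic name here are illustrative assumptions; note that the pipeline input `address` must match the `send_to` value exactly, and pipeline addresses are case-sensitive, so `forward_Kafka` and `forward_kafka` are different addresses):

```
# forward_kafka.conf — illustrative sketch, not the actual config
input {
  pipeline {
    address => forward_Kafka   # must match send_to exactly (case-sensitive)
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:7878"
    topic_id => "scribbler-logs"   # hypothetical topic name
  }
}
```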