Hi,
I have a Logstash instance running in a Docker container, sending logs to Kafka using the kafka output plugin.
The instance is configured with the default settings (including the Kafka host and the topic); a sanitized config sketch follows the version list below.
I am also using the command-line flag --config.reload.automatic.
Logstash version: 5.5.0
Plugin version: logstash-output-kafka 5.1.7
Kafka version: 0.10.1.0
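For reference, the pipeline output is essentially the defaults; a sanitized sketch (the bootstrap server and topic name here are placeholders, not my real values):

output {
  kafka {
    # mostly default settings; host and topic are placeholders
    bootstrap_servers => "kafka:9092"
    topic_id => "logs"
  }
}

Logstash itself is started roughly like: bin/logstash -f /usr/share/logstash/pipeline/logstash.conf --config.reload.automatic (the config path is a placeholder).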
When testing system reboots, I noticed that if the Logstash instance loads faster than Kafka, I receive the following error:
Failed to construct kafka producer, :backtrace=>[
  "org.apache.kafka.clients.producer.KafkaProducer.<init>(org/apache/kafka/clients/producer/KafkaProducer.java:335)",
  "org.apache.kafka.clients.producer.KafkaProducer.<init>(org/apache/kafka/clients/producer/KafkaProducer.java:188)",
  "java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)",
  "RUBY.create_producer(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.7/lib/logstash/outputs/kafka.rb:242)",
  "RUBY.register(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.7/lib/logstash/outputs/kafka.rb:178)",
  "RUBY.register(/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9)",
  "RUBY.register(/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:41)",
  "RUBY.register_plugin(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281)",
  "RUBY.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292)",
  "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)",
  "RUBY.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292)",
  "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301)",
  "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:226)",
  "RUBY.start_pipeline(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398)",
  "java.lang.Thread.run(java/lang/Thread.java:748)"]}
09:43:35.799 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
The error by itself is fine, but then everything gets stuck: Logstash emits no further errors, and the container keeps running but stops reacting and never retries the connection.
I have tried playing with the plugin settings, such as the metadata age, but couldn't see any change in behavior.
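For example, I tried variations along these lines (the exact values are just ones I experimented with, and the host and topic are placeholders):

output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "logs"
    metadata_max_age_ms => 30000         # refresh topic metadata more often (default 300000)
    metadata_fetch_timeout_ms => 10000   # fail the initial metadata fetch sooner (default 60000)
    retry_backoff_ms => 1000             # wait longer before retrying a failed send (default 100)
  }
}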
However, when I removed the --config.reload.automatic flag, Logstash does seem to retry the connection; but even when the retry succeeds, the data never arrives in the Kafka topic.
P.S. I saw the same behavior when I manually restarted the Logstash instance while Kafka was down and then brought Kafka back up after Logstash had started (if I then restart Logstash, everything works normally).
Help would be appreciated both with the command-line flag that makes everything stop reacting, and with the data that never reaches the topic even without the flag.