Hi there,
I have three pipelines running: one is a syslog listener and the other two are JDBC inputs.
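My pipelines.yml looks roughly like this (the first id is taken from the log below; the other ids and the paths are approximations):

- pipeline.id: jdbc_mxp_svil_input
  path.config: "/etc/logstash/conf.d/jdbc_mxp_svil.conf"
- pipeline.id: jdbc_mxp_prod_input
  path.config: "/etc/logstash/conf.d/jdbc_mxp_prod.conf"
- pipeline.id: syslog_input
  path.config: "/etc/logstash/conf.d/syslog.conf"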
As the title says, I can't shut Logstash down safely: it hangs while trying to stop itself.
Whether I try to reload the pipelines with kill -SIGHUP <logstash_pid> or shut Logstash down entirely via systemctl, it just hangs, looping on ShutdownWatcherExt warnings, so I have to force the shutdown. These are the two commands I'm running:
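# reload all pipelines in place
kill -SIGHUP <logstash_pid>

# full shutdown through the service manager
systemctl stop logstash

In both cases the log keeps repeating warnings like this until I force-kill the process: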
[2019-05-30T11:53:11,956][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>112, "name"=>"[jdbc_mxp_svil_input]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}, {"thread_id"=>113, "name"=>"[jdbc_mxp_svil_input]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}, {"thread_id"=>108, "name"=>"[jdbc_mxp_svil_input]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>109, "name"=>"[jdbc_mxp_svil_input]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>110, "name"=>"[jdbc_mxp_svil_input]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>111, "name"=>"[jdbc_mxp_svil_input]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}]}}
(the same warning then repeats every ~5 seconds; only the timestamp changes)
I think this is because the JDBC inputs are scheduled to run every minute: the stalling threads are all stuck in rufus-scheduler's join, so it looks like the scheduler never exits when Logstash tries to stop or reload the pipeline.
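For reference, both JDBC inputs are configured roughly like this (driver, connection string and statement are placeholders):

input {
  jdbc {
    jdbc_driver_library => "/path/to/driver.jar"
    jdbc_driver_class => "com.example.Driver"
    jdbc_connection_string => "jdbc:..."
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT ..."
    # rufus-scheduler cron syntax: run once every minute
    schedule => "* * * * *"
  }
}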
Is my supposition correct?
Is there any workaround, or am I missing something?
How can I shut Logstash down cleanly without having to kill the process?
Many thanks for your help!
Alessandro