Stalling thread info jdbc

Hello,
Whenever I try to make some changes in my configs, I get:

 [2019-01-17T09:04:25,959][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
 [2019-01-17T09:04:31,144][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>67, "name"=>"[main]<file", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/filewatch-0.9.0/lib/filewatch/watched_file.rb:63:in `file_close'"}, {"thread_id"=>68, "name"=>"[main]<file", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/filewatch-0.9.0/lib/filewatch/watch.rb:288:in `sleep'"}, {"thread_id"=>72, "name"=>"[main]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}, {"thread_id"=>73, "name"=>"[main]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}], ["LogStash::Filters::Grok", {"patterns_dir"=>["/home/logger/logfiles_monitoring/logstash-6.2.4/config/confFiles/grokPatterns"], "match"=>{"message"=>"\\[%{WAFDATE:date}\\]\\ \\[(?<message_type>\\:*\\w+(\\:\\w+)?)\\]\\ \\[pid\\ %{NUMBER:pid}:tid\\ %{NUMBER:tid}\\]\\ \\[client\\ (?<client_ip>%{IP}:%{INT})\\]\\ (\\[client\\ %{IP:client_ip_2}\\]\\ )?ModSecurity:\\ (?<security_mod_message2>((?!\\[file).)*)\\[file\\ \\\"(?<file>((?!\\\"\\]\\ ).)*)\\\"\\]\\ \\[line\\ \\\"%{NUMBER:line}\\\"\\]\\ \\[id\\ \\\"%{NUMBER:rule_id}\\\"\\]\\ (\\[rev\\ \\\"%{NUMBER:rev}\\\"\\]\\ )?(\\[rev\\ \\\"%{NUMBER:rev}\\\"\\]\\ )?(\\[msg\\ \\\"(?<msg>((?!\\\"\\]\\ ).)*)\\\"\\]\\ )?(\\[data\\ \\\"(?<data>((?!\\\"\\]\\ ).)*)\\\"\\]\\ )?(\\[severity\\ \\\"%{WORD:severity}\\\"\\]\\ )?(\\[ver\\ \\\"(?<ver>((?!\\\"\\]\\ ).)*)\\\"\\]\\ )?(\\[maturity\\ \\\"%{NUMBER:maturity}\\\"\\]\\ )?(\\[accuracy\\ \\\"%{NUMBER:accuracy}\\\"\\]\\ )?(?<tags>(\\[tag\\ \\\"((?!\\\"\\]\\ ).)*\\\"\\]\\ )+)(?<security_mod_message>((?!\\[hostname).)*)(\\[hostname\\ \\\"(?<hostname>((?!\\\"\\]\\ ).)*)\\\"\\]\\ )?(\\[uri\\ \\\"(?<uri>((?!\\\"\\]\\ ).)*)\\\"\\]\\ )?\\[unique_id\\ \\\"(?<unique_id>((?!\\\"\\]\\ ).)*)\\\"\\](,\\ referer: %{GREEDYDATA:referer})?"}, "id"=>"72e3d7bfc92cb5a738982885729aa82ac876e2ec044a09de897791aef53d9686"}]=>[{"thread_id"=>53, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>54, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>55, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>57, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>58, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>59, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>60, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>61, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>62, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>63, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>64, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in 
`read_batch'"}], ["LogStash::Filters::Mutate", {"remove_field"=>"message", "id"=>"5b68f2dbfd58a0413989a84fc3e02204cdad2886828ff3335b28adbd34f7fe0b"}]=>[{"thread_id"=>56, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}]}}
[2019-01-17T09:04:31,165][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
    [2019-01-17T09:05:06,064][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>72, "name"=>"[main]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}, {"thread_id"=>73, "name"=>"[main]<jdbc", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler.rb:170:in `join'"}], ["LogStash::Filters::Mutate", {"convert"=>{"entries"=>"integer", "mintime"=>"integer", "maxtime"=>"integer", "avgtime"=>"integer", "mediantime"=>"integer"}, "id"=>"9032c7435b045e31f6eed555cefaf0d4f459e5be91dd7ea2c6bd3e16e2ce0fd9"}]=>[{"thread_id"=>53, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>54, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>55, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>56, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>57, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:in `read_batch'"}, {"thread_id"=>58, "name"=>nil, "current_call"=>"[...]/logstash-

And so on.

If I kill Logstash and start it again, it works fine. The problem only happens during a pipeline reload.
I also have some configs where I use the jdbc input.

The jdbc input works on a schedule.
The scheduler runs the jdbc fetch operation in a worker thread from its pool. The scheduler itself runs in a thread created by Logstash, and that thread waits for the pool thread to complete; while it is waiting it cannot respond to the shutdown signal, because it is in a wait state and not running.
The pool thread is executing the jdbc fetch, so it is either waiting on a response or processing one. If it is waiting on a response, it can't check the shutdown flag. If it is processing a response, two stages are executed:

1) A function in a third-party library unpacks the JDBC records and prepares them for the Ruby world (from Java). This third-party code is oblivious to the shutdown signal; the signal is not in scope at that point.
2) The prepped records are processed by Logstash code that creates events and pushes them to the pipeline.

It is only in this last stage that we can check for the shutdown signal, but, sadly, we don't exit the record-processing loop early when the signal is detected, because of sql_last_value semantics and because records are not guaranteed to be ordered.
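
To make the moving parts concrete, here is a minimal sketch of a scheduled jdbc input; the connection string, driver, table, and column names are made up for illustration. The schedule option is what hands control to rufus-scheduler, whose thread is the one you see parked in scheduler.rb join in the logs above:

    input {
      jdbc {
        # Hypothetical connection details -- replace with your own.
        jdbc_driver_library => "/path/to/postgresql-jdbc.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
        jdbc_user => "logstash"

        # Cron-style schedule: every minute. This starts the rufus-scheduler
        # thread, which then joins (waits on) the worker that runs the fetch.
        schedule => "* * * * *"

        # :sql_last_value is tracked by the plugin between runs.
        statement => "SELECT * FROM events WHERE id > :sql_last_value"
        use_column_value => true
        tracking_column => "id"
      }
    }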

If you are still in the developing-and-debugging stage with your config, I suggest you limit your query to a small set of records, around 1000, or cores * batch size (125 by default). Doing this should make the jdbc input feel like it responds quickly to the shutdown signal.
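
For example, during development you could cap the statement itself. A rough sketch, where the table and column names are hypothetical and the LIMIT syntax depends on your database:

    input {
      jdbc {
        # ...same connection settings as in your real config...
        schedule => "* * * * *"

        # Cap the result set while developing: ~1000 rows, i.e. roughly
        # workers * batch size (8 * 125 = 1000 with the default batch size).
        # A small fetch finishes quickly, so a reload or shutdown does not
        # appear to stall on the scheduler's worker thread.
        statement => "SELECT * FROM events WHERE id > :sql_last_value ORDER BY id LIMIT 1000"
        use_column_value => true
        tracking_column => "id"
      }
    }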


Thanks. You helped me a lot.

Cool. Glad to help.
