Hello,
This is the second "interrupted" issue that I've encountered with Logstash 5.4.3 that causes it to crash (see my other post for that issue).
```
[2017-08-30T04:12:41,881][ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
{"exception"=>"interrupted waiting for mutex: null",
 "backtrace"=>["org/jruby/ext/thread/Mutex.java:94:in `lock'",
 "org/jruby/ext/thread/Mutex.java:147:in `synchronize'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:190:in `lazy_initialize'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:268:in `each_name'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:151:in `each_name'",
 [ ... ]
```
```
[2017-08-30T04:12:49,271][FATAL][logstash.runner ] An unexpected error occurred!
{:error=>#<ConcurrencyError: interrupted waiting for mutex: null>,
 :backtrace=>["org/jruby/ext/thread/Mutex.java:94:in `lock'",
 "org/jruby/ext/thread/Mutex.java:147:in `synchronize'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:190:in `lazy_initialize'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:268:in `each_name'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:151:in `each_name'",
 "org/jruby/RubyArray.java:1613:in `each'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:150:in `each_name'",
 "/usr/share/logstash/vendor/jruby/lib/ruby/1.9/resolv.rb:132:in `getname'",
 [ ... ]
```
This is Logstash 5.4.3, using Elastic's docker image at docker.elastic.co/logstash/logstash:5.4.3
I think something that may be tickling this issue is that my Logstash setup reads from a Kafka "firehose" and is constantly falling behind, processing only about 30% of the data from Kafka (it's a test instance, so it's not critical to me that everything gets processed). That means the Logstash instance stays very busy.
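The resolv.rb frames in the backtrace point at a DNS lookup (getname/each_name is a reverse lookup), so a dns filter in the pipeline seems like the likely trigger. For context, here is a minimal sketch of that general shape; the broker address, topic, field name, and output host below are placeholders, not my actual configuration:

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"   # placeholder broker address
    topics            => ["firehose"]   # placeholder topic name
    consumer_threads  => 4
  }
}

filter {
  dns {
    # Reverse-resolve an IP field to a hostname; these lookups go through
    # JRuby's resolv.rb, which is where the mutex wait gets interrupted.
    reverse => ["source_ip"]            # placeholder field name
    action  => "replace"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]     # placeholder
  }
}
```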