Logstash.agent - Error in reactor loop escaped: Bad file descriptor

I've upgraded from 2.1.3 to 5.6.3 and patched my config for breaking changes:

# /usr/share/logstash/bin/logstash -t  --path.config /etc/logstash/conf.d --path.settings /etc/logstash
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

But when I try to launch Logstash, it just keeps restarting. This is what it logs at debug level:

[2017-11-06T12:42:06,748][DEBUG][logstash.agent           ] Starting puma
[2017-11-06T12:42:06,749][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2017-11-06T12:42:06,749][DEBUG][logstash.api.service     ] [api-service] start
[2017-11-06T12:42:06,765][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-06T12:42:06,779][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T12:42:06,780][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]
[2017-11-06T12:42:06,780][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T12:42:06,781][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]
[2017-11-06T12:42:06,781][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T12:42:06,781][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]
[2017-11-06T12:42:06,781][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T12:42:06,781][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]

It never starts any listeners. What might be wrong, given that it logs multiple EBADF errors in the Puma reactor?

Okay, it turned out I still had a few breaking-changes issues in some of my filters. After fixing those, Logstash starts up, but something kills it while processing events, and it's hard to tell which filter config file this happens in:

[2017-11-06T14:11:29,538][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-11-06T14:11:29,569][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-06T14:11:29,572][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>"0.0.0.0:25826"}
[2017-11-06T14:11:29,589][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-11-06T14:11:29,594][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"0.0.0.0:25826", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2017-11-06T14:11:29,604][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-06T14:11:35,084][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<NameError: undefined local variable or method `dotfile' for #<AwesomePrint::Inspector:0x2d628e22>>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/inspector.rb:163:in `merge_custom_defaults!'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/inspector.rb:50:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/awesome_print-1.8.0/lib/awesome_print/core_ext/kernel.rb:9:in `ai'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-rubydebug-3.0.4/lib/logstash/codecs/rubydebug.rb:39:in `encode_default'", "org/jruby/RubyMethod.java:120:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-rubydebug-3.0.4/lib/logstash/codecs/rubydebug.rb:35:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/base.rb:50:in `multi_encode'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/base.rb:50:in `multi_encode'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:434:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:433:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:381:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"]}
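In hindsight, a generic way to narrow down which config file triggers a runtime crash like this is to bisect the config directory: run Logstash against half of the `conf.d` files at a time until a single file remains. A sketch (paths and file globs are from my setup, adjust to yours; each subset still needs at least one input and output to be useful):

```shell
# Sketch: bisect which conf.d file triggers the runtime crash by feeding
# Logstash a subset of the config files at a time. Paths are examples.
mkdir -p /tmp/ls-bisect
cp /etc/logstash/conf.d/0*.conf /tmp/ls-bisect/   # copy one half of the files
/usr/share/logstash/bin/logstash --path.config /tmp/ls-bisect --path.settings /etc/logstash
# If it still crashes, the culprit is in this half; otherwise swap in the
# other half and repeat, halving each time, until one file remains.
```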

Turning off these predefined/extra outputs seems to have fixed the last issue:

  if [@metadata][output-stdout] == 'true' {
    stdout { codec => rubydebug { metadata => true } }
  }

  if [@metadata][output-debug] == 'true' {
    file {
      path => '/tmp/logstashed.debug'
      flush_interval => 30
      codec => rubydebug { metadata => true }
    }
  }

  if '_grokparsefailure' in [tags] {
    file {
      path => '/tmp/logstashed.grokfailures'
      flush_interval => 30
      codec => rubydebug
    }
  }
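Note that the FATAL backtrace above points at the rubydebug codec's awesome_print dependency rather than at my own filters, so another workaround (a sketch, untested) might be to keep the debug file output but swap the codec so that code path is never hit. `json_lines` trades pretty-printing for stability, and be aware it does not serialize `[@metadata]` fields unless you copy them into the event first:

```
  if [@metadata][output-debug] == 'true' {
    file {
      path => '/tmp/logstashed.debug'
      flush_interval => 30
      # json_lines avoids the rubydebug -> awesome_print code path;
      # @metadata is not included in the output unless copied into the event
      codec => json_lines
    }
  }
```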

Just wondering if it's because I've got dot-separated field names...
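If dotted field names really were the culprit, one way to rule that out would be to rename them into nested fields before they reach the outputs, e.g. with a mutate filter (the field names below are hypothetical, just to show the shape):

```
  filter {
    mutate {
      # Hypothetical example: turn a dotted top-level name into a nested field
      rename => { "http.status" => "[http][status]" }
    }
  }
```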
