Logstash after upgrade to 5.6.3 from 2.1.3, ES output module: continues to do health checking

Wondering why Logstash, after being upgraded from 2.1.3 to 5.6.3, seems to keep doing this every few seconds:

[2017-11-06T09:20:07,880][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://cbdA:9200/, http://cbdB:9200/, http://cbdC:9200/]}}
[2017-11-06T09:20:07,880][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://cbdA:9200/, :path=>"/"}
[2017-11-06T09:20:07,883][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://cbdA:9200/"}
[2017-11-06T09:20:07,885][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://cbdB:9200/, :path=>"/"}
[2017-11-06T09:20:07,888][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://cbdB:9200/"}
[2017-11-06T09:20:07,891][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://cbdC:9200/, :path=>"/"}
[2017-11-06T09:20:07,894][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://cbdC:9200/"}
[2017-11-06T09:20:07,897][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//cbdA:9200", "//cbdB:9200", "//cbdC:9200"]}
[2017-11-06T09:20:08,159][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

A simple GET from the CLI works fine:

# curl -s -XGET http://cbdA:9200/
{
  "name" : "cbdA",
  "cluster_name" : "mx9es",
  "cluster_uuid" : "hm26C6reTXiat-QgzHFOCg",
  "version" : {
    "number" : "5.6.3",
    "build_hash" : "1a2f265",
    "build_date" : "2017-10-06T20:33:39.012Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

This is my output config:

elasticsearch {
   #cluster => 'mxes_data'
   index => '%{[@metadata][esindex]}-%{+YYYY.MM.dd}'
   action => 'index'
   codec => 'plain'
   sniffing => false
   manage_template => false
   hosts => ['cbdA:9200','cbdB:9200','cbdC:9200']
}
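
For what it's worth, the output plugin does expose a couple of knobs around this health checking; a minimal sketch with the defaults spelled out (values here are just illustrative, not a tuning recommendation):

elasticsearch {
   hosts => ['cbdA:9200','cbdB:9200','cbdC:9200']
   sniffing => false
   # seconds between attempts to resurrect endpoints marked as down (default 5)
   resurrect_delay => 5
   # path hit by the health check requests seen in the log above (default "/")
   healthcheck_path => '/'
}

Since I'm on the defaults anyway, the repeated checks look more like the whole output being re-created on every restart than the health checker itself misbehaving.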

No output seems to reach our ES 5.6.3 cluster anymore...

Probably because Logstash hasn't started listening on its inputs yet... :confused:

It might not be the ES output that has the issue; after turning the log level up to debug, I find this in the plain log:

[2017-11-06T11:05:10,401][DEBUG][logstash.agent           ] Starting puma
[2017-11-06T11:05:10,403][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2017-11-06T11:05:10,404][DEBUG][logstash.api.service     ] [api-service] start
[2017-11-06T11:05:10,424][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-06T11:05:10,436][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T11:05:10,437][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]
[2017-11-06T11:05:10,437][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)
[2017-11-06T11:05:10,437][DEBUG][logstash.agent           ] ["org/jruby/RubyIO.java:3705:in `select'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:29:in `run_internal'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/reactor.rb:138:in `run_in_thread'"]
[2017-11-06T11:05:10,438][DEBUG][logstash.agent           ] Error in reactor loop escaped: Bad file descriptor - Bad file descriptor (Errno::EBADF)

and finally, after many of these bad file descriptor events, it seems to restart.

# /usr/share/logstash/bin/logstash -t  --path.config /etc/logstash/conf.d --path.settings /etc/logstash
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Hints appreciated, TIA

Turned out there were still some 'breaking changes' issues in some Ruby filter code.
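
For anyone else hitting this: one well-known ruby-filter breaking change going from 2.x to 5.x is the event API (event['field'] access was replaced by event.get / event.set). Roughly this kind of change (field name made up for illustration):

filter {
  ruby {
    # Logstash 2.x style, no longer valid in 5.x:
    #   event['duration_ms'] = event['duration'].to_f * 1000
    # Logstash 5.x event API uses get/set instead:
    code => "event.set('duration_ms', event.get('duration').to_f * 1000)"
  }
}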
See also Logstash.agent - Error in reactor loop escaped: Bad file descriptor
