Logstash complaining about no ES nodes being available

Hi there

I have a weird problem with Logstash outputting to Elasticsearch: it keeps complaining in the logs (about once per second!) that none of the configured Elasticsearch nodes are available, although it manages to get logs indexed anyway... Is this a symptom of it not being able to keep up with the rate of messages to index? There are no clear signs of saturation on the servers.

{:timestamp=>"2015-07-20T18:34:46.451000+0000", :message=>"Got error to send bulk of actions: None of the configured nodes are available: []", :level=>:error}
{:timestamp=>"2015-07-20T18:34:46.451000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>604, :exception=>org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [], :backtrace=>["org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(org/elasticsearch/client/transport/TransportClientNodesService.java:279)", "org.elasticsearch.client.transport.TransportClientNodesService.execute(org/elasticsearch/client/transport/TransportClientNodesService.java:198)", "org.elasticsearch.client.transport.support.InternalTransportClient.execute(org/elasticsearch/client/transport/support/InternalTransportClient.java:106)", "org.elasticsearch.client.support.AbstractClient.bulk(org/elasticsearch/client/support/AbstractClient.java:163)", "org.elasticsearch.client.transport.TransportClient.bulk(org/elasticsearch/client/transport/TransportClient.java:356)", "org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(org/elasticsearch/action/bulk/BulkRequestBuilder.java:164)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:91)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:65)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:497)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::ElasticSearch.submit(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:505)", "LogStash::Outputs::ElasticSearch.submit(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:505)", "LogStash::Outputs::ElasticSearch.submit(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:504)", "LogStash::Outputs::ElasticSearch.submit(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:504)", "LogStash::Outputs::ElasticSearch.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:529)", "LogStash::Outputs::ElasticSearch.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:529)", "LogStash::Outputs::ElasticSearch.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:528)", "LogStash::Outputs::ElasticSearch.flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.1-java/lib/logstash/outputs/elasticsearch.rb:528)", "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219)", "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1341)", "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216)", 
"Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193)", "Stud::Buffer.buffer_flush(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:193)", "RUBY.buffer_initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:112)", "org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1511)", "RUBY.buffer_initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/buffer.rb:110)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

My configuration looks like this:

output {
  elasticsearch {
    protocol => "transport"
    host => [ "logs1.xxxxxxx","logs2.xxxxxxx" ]
    cluster => "xxxxxxx_prod_infra"
    workers => 2
  }
}

Setup:

  • Logstash 1.5.2
  • Elasticsearch 1.7.0
  • Oracle Java 8u51
  • GNU/Linux Debian wheezy

Thank you for your help; let me know if you need me to provide more information.

m.

Any help on this, please?

Some things to check:

  • Does the cluster name in your Logstash config match your ES cluster's name?
  • Is there a firewall/network issue blocking Logstash's outbound connections to port 9300, or the Elasticsearch servers' inbound port 9300?
  • Is ES actually bound to port 9300?
  • Do you see any other error messages? Try running Logstash with --debug to see if anything else shows up.
  • Is Shield enabled on your ES cluster?
  • Can you try protocol => http and see if that works instead? (See the sketch below.)
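
On those last points: with the transport protocol, the cluster setting must exactly match cluster.name in elasticsearch.yml, and the connection goes to port 9300. For the HTTP test, a minimal variant of your output would look something like this (hostnames are yours; port 9200 is assumed to be the default ES HTTP port):

output {
  elasticsearch {
    protocol => "http"
    host => [ "logs1.xxxxxxx", "logs2.xxxxxxx" ]
    # no cluster setting needed over HTTP; port defaults to 9200
    workers => 2
  }
}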

Yes.

No.

Yes.

No, not installed.

I tried, but my instance processes several hundred messages/sec, so it's difficult to keep up with the output in a terminal. I haven't spotted any error messages other than the ones I've described, though.

This does the trick (no more error messages in the logs); however, it adds significant network traffic overhead on my servers. A short-term solution, perhaps?

Thank you for your time,

m.

I had the same issue, and when I turned on debug it said it couldn't get the node status over transport. HTTP worked fine. You need to give the logstash user access to cluster:monitor/nodes/info. Adding that to roles.yml fixed the issue for me.

Roles? With Shield?

Yes. I didn't see that you weren't running Shield. I am, and that's what fixed the issue for me.
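
In case it's useful to anyone else who lands here, the relevant part of my roles.yml looks roughly like the sketch below. The cluster:monitor/nodes/info entry is the part that fixed the transport errors; the index pattern and index privileges are just what my setup happens to use, and the exact file format varies between Shield versions, so treat this as a sketch rather than something to copy verbatim.

logstash:
  cluster: indices:admin/template/get, indices:admin/template/put, cluster:monitor/nodes/info
  indices:
    # index pattern and privileges below are specific to my setup -- adjust to yours
    'logstash-*': indices:data/write/bulk, create_index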