[Logstash] Elasticsearch unreachable


(Dani Piñol) #1

Hi,

My ELK stack had been running great for months, but recently I started receiving the following error:

timestamp=>"2018-10-10T22:00:08.928000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://172.27.208.42:9200"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :error_message=>"[400] {"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"},"status":400}", :error_class=>"Elasticsearch::Transport::Transport::Errors::BadRequest", :backtrace=>[
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:312:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
"org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:172:in `safe_bulk'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'",
"org/jruby/RubyArray.java:1653:in `each_slice'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:129:in `worker_multi_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'",
"org/jruby/RubyHash.java:1342:in `each'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:232:in `worker_loop'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :level=>:error}

ES is up and reachable:

nc -zv 172.27.208.42 9200
Connection to 172.27.208.42 9200 port [tcp/wap-wsp] succeeded!

[root@gagdlog03 elasticsearch]# curl -XGET http://172.27.208.42:9200
{
  "name" : "NODE3",
  "cluster_name" : "ELK1",
  "cluster_uuid" : "0ue7OXoTRwGLhiJbply7gg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

[2018-10-10 21:57:44,765][INFO ][node ] [NODE3] starting ...
[2018-10-10 21:57:45,056][INFO ][transport ] [NODE3] publish_address {172.27.208.42:9300}, bound_addresses {172.27.208.42:9300}
[2018-10-10 21:57:45,066][INFO ][discovery ] [NODE3] ELK1/PVCMDPWcQMWNCqBiRajvbQ
[2018-10-10 21:57:51,198][INFO ][cluster.service ] [NODE3] detected_master {NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}, added {{NODE1}{UeA4iQ-5QdyefCgyEZRJHQ}{172.27.208.41}{172.27.208.41:9300},{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300},{NODE4}{rFxXSauOQvyQ-4MJ6Agy4g}{172.27.208.59}{172.27.208.59:9300}{data=false, master=false},{NODE2}{iNomYyBiSHila2WqWVAH6Q}{172.27.208.40}{172.27.208.40:9300},{NODE6}{V3FVO-cfQNe8IkgHqwu3yQ}{172.27.209.77}{172.27.209.77:9300},}, reason: zen-disco-receive(from master [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}])
[2018-10-10 21:58:15,071][WARN ][discovery ] [NODE3] waited for 30s and no initial state was set by the discovery
[2018-10-10 21:58:15,123][INFO ][http ] [NODE3] publish_address {172.27.208.42:9200}, bound_addresses {172.27.208.42:9200}
[2018-10-10 21:58:15,124][INFO ][node ] [NODE3] started
[2018-10-10 21:58:35,405][WARN ][cluster.service ] [NODE3] cluster state update task [zen-disco-receive(from master [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}])] took 44.2s above the warn threshold of 30s
[2018-10-10 21:58:59,038][WARN ][transport ] [NODE3] Received response for a request that has timed out, sent [66840ms] ago, timed out [36840ms] ago, action [internal:discovery/zen/fd/master_ping], node [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}], id [19]


(Lee Hinman) #2

The ES error is encapsulated in the Logstash error:

Validation Failed: 1: no requests added;

That means the bulk request Logstash sent contained no indexing operations at all. I don't know whether Logstash sends empty bulk requests by default; you may want to ask in the Logstash discuss forums, where there is more Logstash expertise.
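For context: the Bulk API expects a newline-delimited JSON body, one action line (optionally followed by a source-document line) per operation, ending with a trailing newline. A body with zero operations is exactly what produces "Validation Failed: 1: no requests added". A minimal sketch of how such a body is assembled (a hypothetical helper with illustrative index/type names, not Logstash's actual code):

```python
import json

def build_bulk_body(index, doc_type, docs):
    """Build an NDJSON bulk body: an action line plus a source line per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # The Bulk API requires a trailing newline after the last line. An empty
    # docs list yields an empty body, which Elasticsearch rejects with
    # HTTP 400 "Validation Failed: 1: no requests added;".
    return "\n".join(lines) + "\n" if lines else ""

body = build_bulk_body("logstash-2018.10.10", "logs", [{"message": "hello"}])
```

So a client that guards against posting an empty body (as the sketch's empty-string return makes easy to check) would never hit this particular 400.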


(Dani Piñol) #3

Thanks. I'll try!


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.