Attempted to send a bulk request to Elasticsearch

Hi,

My ELK stack has been running great for months, but recently I've started receiving the following error:

timestamp=>"2018-10-10T22:00:08.928000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://172.27.208.42:9200"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :error_message=>"[400] {"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"},"status":400}", :error_class=>"Elasticsearch::Transport::Transport::Errors::BadRequest", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:201:in __raise_transport_error'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:312:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:172:in safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:101:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:86:in retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:29:in multi_receive'", "org/jruby/RubyArray.java:1653:in each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:28:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:130:in worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:129:in worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:114:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in output_batch'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:232:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :level=>:error}

ES is up and reachable:

nc -zv 172.27.208.42 9200
Connection to 172.27.208.42 9200 port [tcp/wap-wsp] succeeded!

[root@gagdlog03 elasticsearch]# curl -XGET http://172.27.208.42:9200
{
  "name" : "NODE3",
  "cluster_name" : "ELK1",
  "cluster_uuid" : "0ue7OXoTRwGLhiJbply7gg",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}

[2018-10-10 21:57:44,765][INFO ][node ] [NODE3] starting ...
[2018-10-10 21:57:45,056][INFO ][transport ] [NODE3] publish_address {172.27.208.42:9300}, bound_addresses {172.27.208.42:9300}
[2018-10-10 21:57:45,066][INFO ][discovery ] [NODE3] ELK1/PVCMDPWcQMWNCqBiRajvbQ
[2018-10-10 21:57:51,198][INFO ][cluster.service ] [NODE3] detected_master {NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}, added {{NODE1}{UeA4iQ-5QdyefCgyEZRJHQ}{172.27.208.41}{172.27.208.41:9300},{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300},{NODE4}{rFxXSauOQvyQ-4MJ6Agy4g}{172.27.208.59}{172.27.208.59:9300}{data=false, master=false},{NODE2}{iNomYyBiSHila2WqWVAH6Q}{172.27.208.40}{172.27.208.40:9300},{NODE6}{V3FVO-cfQNe8IkgHqwu3yQ}{172.27.209.77}{172.27.209.77:9300},}, reason: zen-disco-receive(from master [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}])
[2018-10-10 21:58:15,071][WARN ][discovery ] [NODE3] waited for 30s and no initial state was set by the discovery
[2018-10-10 21:58:15,123][INFO ][http ] [NODE3] publish_address {172.27.208.42:9200}, bound_addresses {172.27.208.42:9200}
[2018-10-10 21:58:15,124][INFO ][node ] [NODE3] started
[2018-10-10 21:58:35,405][WARN ][cluster.service ] [NODE3] cluster state update task [zen-disco-receive(from master [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}])] took 44.2s above the warn threshold of 30s
[2018-10-10 21:58:59,038][WARN ][transport ] [NODE3] Received response for a request that has timed out, sent [66840ms] ago, timed out [36840ms] ago, action [internal:discovery/zen/fd/master_ping], node [{NODE5}{-qgddYGKTNSpIhDX3PO3AA}{172.27.209.78}{172.27.209.78:9300}], id [19]

"[400] {"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"},"status":400}

Any ideas?

It seems like updating your cluster state is taking a very long time. How many indices/shards do you have in the cluster? What is the size of your cluster state?
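(A couple of quick ways to answer those questions from the shell, assuming curl access to the node as in the checks above:)

```shell
# Number of shards in the cluster (the _cat API prints one line per shard):
curl -s 'http://172.27.208.42:9200/_cat/shards' | wc -l

# Rough size of the serialized cluster state in bytes -- this is what has
# to be shipped to every node on each cluster state update:
curl -s 'http://172.27.208.42:9200/_cluster/state' | wc -c
```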

Thanks for your reply.

Cluster health:
{
  "cluster_name" : "ELK1",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 1436,
  "active_shards" : 2872,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
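For what it's worth, that health output implies a fairly dense shard layout; a quick back-of-the-envelope check:

```shell
# Shard density from the cluster health above:
# 2872 active shards (1436 primaries plus one replica each) on 5 data nodes.
active_shards=2872
data_nodes=5
echo "$(( active_shards / data_nodes )) shards per data node"
# -> 574 shards per data node
```

Roughly 574 shards per data node is generally considered on the high side for a 2.x cluster, and a large shard count makes every cluster state update more expensive, which would fit the slow cluster state updates in the logs.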

When I restart the Logstash service, it works fine for about 20 minutes.

How much heap do your nodes have? Do you see any long or frequent GC in the logs?

No frequent GC is reported. The Java heap on each node is configured at 26 GB.
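(For anyone checking the same thing: a 2.x node logs slow collections through the monitor.jvm logger, so grepping the node log should surface them, and the nodes stats API reports heap usage and cumulative GC time. The log path is assumed from the -Des.default.path.logs=/var/log/elasticsearch setting in the ps output below, with the log file named after the cluster, ELK1.)

```shell
# Slow old-gen collections show up as "[gc][old]" warnings in the node log
# (path and file name assumed from the defaults above):
grep '\[gc\]' /var/log/elasticsearch/ELK1.log | tail -20

# Heap usage and cumulative GC counts/times per node:
curl -s 'http://172.27.208.42:9200/_nodes/stats/jvm?pretty'
```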

gadlog00:
498 24896 1 25 Sep17 ? 6-11:23:36 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
gadlog04:
497 19922 1 1 Sep11 ? 09:26:16 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
gadlog05:
498 3241 1 27 Sep17 ? 7-02:13:54 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
gadlog01:
498 24356 1 62 Oct03 ? 5-16:47:54 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
gadlog03:
495 58325 1 59 Oct10 ? 1-03:55:24 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch
gadlog02:
495 27890 1 70 Oct03 ? 6-12:08:33 /usr/bin/java -Xms26g -Xmx26g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.4.6.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch

The problem only occurs on node 3; I think the cluster is otherwise healthy.
Solved: I upgraded the Java version and now it works fine.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.