Attempted to send a bulk request to Elasticsearch failed: ClientProtocolException

I have been using Elasticsearch 2.3.1 and Logstash 2.3.4 for quite a long time. Today I found the following warning in Elasticsearch. I was trying to copy the logs when I think I pressed Ctrl+C; a prompt then appeared asking whether to stop a job (I forget the exact wording), and I chose No/n. I also restarted Elasticsearch, but since then I have been getting the following Logstash errors.

Can you kindly guide me on what the issue is?

NOTE: The Elasticsearch URL http://localhost:9200/ is reachable and search is also working. Logstash and Elasticsearch are both running on the same server. No changes have been made on the server, including the Java version.

Elasticsearch Warning:

[2021-10-11 21:25:33,048][WARN ][cluster.routing.allocation.decider] [Quentin Quire] high disk watermark [90%] exceeded on [MSVQqunPRa2EwaDl03eGlw][Quentin Quire][C:\Elasticsearch-2.3.1\data\Elasticsearch\nodes\0] free: 2.6gb[8.8%], shards will be relocated away from this node

Logstash error:

Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200", "http://localhost:9300"]', but an error occurred and it failed! Are you sure you can reach Elasticsearch from this machine using the configuration provided? {:error_message=>"The server failed to respond with a valid HTTP response", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in initialize'", "org/jruby/RubyProc.java:281:in call'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in call'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in call_once'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in code'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in perform_request'", "org/jruby/RubyProc.java:281:in call'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in perform_request'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/Elasticsearch-transport-1.0.18/lib/Elasticsearch/transport/transport/http/manticore.rb:67:in perform_request'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in perform_request'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/Elasticsearch-api-1.0.18/lib/Elasticsearch/api/actions/bulk.rb:90:in bulk'", 
"C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in non_threadsafe_bulk'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-Elasticsearch-2.7.1-java/lib/logstash/outputs/Elasticsearch/http_client.rb:38:in bulk'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-Elasticsearch-2.7.1-java/lib/logstash/outputs/Elasticsearch/http_client.rb:38:in bulk'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:172:in safe_bulk'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-Elasticsearch-2.7.1-java/lib/logstash/outputs/Elasticsearch/common.rb:101:in submit'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:86:in retrying_submit'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-Elasticsearch-2.7.1-java/lib/logstash/outputs/Elasticsearch/common.rb:29:in multi_receive'", "org/jruby/RubyArray.java:1653:in each_slice'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-Elasticsearch-2.7.1-java/lib/logstash/outputs/Elasticsearch/common.rb:28:in multi_receive'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:130:in worker_multi_receive'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:114:in multi_receive'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in output_batch'", "org/jruby/RubyHash.java:1342:in 
each'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in output_batch'", "C:/Elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:232:in worker_loop'", "C:/elasticsearch-2.3.1/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:201:in start_workers'"], :level=>:error}

Welcome to our community! :smiley:

Elasticsearch 2.x is very, very old and long past EOL. As such, it's no longer supported and you need to upgrade.

You've run out of disk space on the node, so you should check that.
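The warning's own numbers show why the watermark tripped: `free: 2.6gb[8.8%]` means 2.6 GB free is only 8.8% of the volume Elasticsearch's data path sits on, so roughly 91% of that disk is used, which is above the default 90% high watermark. A quick back-of-the-envelope check, using only the figures from the warning above:

```shell
# Figures taken from the Elasticsearch warning: "free: 2.6gb[8.8%]"
free_gb=2.6
free_pct=8.8

# If 2.6 GB is 8.8% of the volume, the volume is ~29.5 GB and ~91.2% of it
# is used -- above the default 90% high disk watermark.
awk -v f="$free_gb" -v p="$free_pct" 'BEGIN {
    total = f / (p / 100)
    printf "volume = %.1f GB, used = %.1f%%\n", total, 100 - p
}'
```

Note that the watermark is evaluated against the whole volume holding the data path, not against any single directory, which is why a few GB free can still count as "disk full" here.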

Thank you very much for your reply and welcome message.

From the logs it looks like the disk is low on storage, but 4.88 GB is free for use.

The update of Elasticsearch is planned but cannot be done immediately.

Do you know why I get the error in the Logstash console? How can it be resolved?

You need to free some space. Elasticsearch won't allow any writes until you free up some disk space, so when Logstash tries to index data, it gets an error.

You will need to delete some indices, or delete something else on your C:\ drive, to free up some space.
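As a sketch of what that can look like against a local 2.x cluster (the index name below is just a placeholder, and raising the watermark is only a temporary stopgap, not a substitute for actually freeing disk space):

```shell
# List indices with their sizes to see which ones are worth deleting:
curl -s 'http://localhost:9200/_cat/indices?v'

# Delete an old index you no longer need (name is an example, not yours):
curl -XDELETE 'http://localhost:9200/logstash-2021.01.01'

# Temporary stopgap: raise the high watermark (default 90%) via a transient
# cluster setting until you have freed enough space:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disk.watermark.high": "95%" }
}'
```

Transient settings are lost on a full cluster restart, so this won't silently stick around once the disk problem is properly fixed.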

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.