Error while inserting bulk data in big size via logstash to elasticsearch

Hi All,

I am using Elasticsearch version 5.4.1 and have 8 million records across all indices combined. One index contains 2.1 million records, and I want to insert more records into that index.
On the first bulk request, containing 100,000 records, I am getting:

[2017-10-24T12:29:47,946][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 (
{"type"=>"unavailable_shards_exception", "reason"=>"[jdbc_metadata_pt_new_logs][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[jdbc_metadata_pt_new_logs][2]] containing [27] requests]"}
)
[2017-10-24T12:29:47,946][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 (
{"type"=>"unavailable_shards_exception", "reason"=>"[jdbc_metadata_pt_new_logs][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[jdbc_metadata_pt_new_logs][2]] containing [27] requests]"}
)
...
[2017-10-24T12:29:47,946][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 (
{"type"=>"unavailable_shards_exception", "reason"=>"[jdbc_metadata_pt_new_logs][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[jdbc_metadata_pt_new_logs][2]] containing [27] requests]"}
)
[2017-10-24T12:29:47,946][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 (
{"type"=>"unavailable_shards_exception", "reason"=>"[jdbc_metadata_pt_new_logs][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[jdbc_metadata_pt_new_logs][2]] containing [27] requests]"}
)
[2017-10-24T12:29:47,946][ERROR][logstash.outputs.elasticsearch] Retrying individual actions

[2017-10-24T13:08:30,172][ERROR][logstash.outputs.elasticsearch] Action
...

[2017-10-24T13:08:30,173][ERROR][logstash.outputs.elasticsearch] Action
[2017-10-24T13:09:29,929][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://url/][Manticore::SocketTimeout] Read timed out {:url=>http://url/, :error_message=>"Elasticsearch Unreachable: [http://url/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-10-24T13:09:29,930][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://url/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2017-10-24T13:09:29,930][DEBUG][logstash.outputs.elasticsearch] Failed actions for last bad bulk request! {:actions=>[["index", {:_id=>"111847", :_index=>"jdbc_metadata_pt_new_logs", :_type=>"jdbc_metadata_pt_new_logs", :_routing=>nil}, 2017-10-24T07:27:10.090Z %{host} %{message}]
...
["index", {:_id=>"1570686", :_index=>"jdbc_metadata_pt_new_logs", :_type=>"jdbc_metadata_pt_new_logs", :_routing=>nil}, 2017-10-24T07:27:10.222Z %{host} %{message}]]}

[2017-10-24T13:09:29,981][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[2017-10-24T13:09:29,981][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
[2017-10-24T13:09:30,439][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
[2017-10-24T13:09:30,440][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
[2017-10-24T13:09:30,460][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck

...

Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:211)

Then I tried a bulk request with only 100 records, and I got the exception below:

[2017-10-24T12:37:43,498][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>3, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/response.rb:50:in `call'"}]}}
[2017-10-24T12:37:48,497][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>3, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/response.rb:50:in `call'"}]}}

...

[2017-10-24T12:38:38,498][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>3, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/manticore-0.6.1-java/lib/manticore/response.rb:50:in `call'"}]}}
[2017-10-24T12:38:43,240][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[jdbc_metadata_pt_new_logs][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[jdbc_metadata_pt_new_logs][2]] containing [index {[jdbc_metadata_pt_new_logs][jdbc_metadata_pt_new_logs][48727], source[n/a, actual length: [4.8kb], max length: 2kb]}]]"})
[2017-10-24T12:38:43,241][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-10-24T12:38:43,241][ERROR][logstash.outputs.elasticsearch] Action

[2017-10-24T12:38:48,497][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>3, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]>worker3", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/interval.rb:84:in `sleep'"}]}}

Please tell me why I am getting this error.

Caused by: java.io.IOException: No space left on device

You're out of disk space. I'm not sure if this is in Logstash or ES, but either way the ES cluster's health is bad. Get ES in shape first.
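As a quick sanity check, you can compare disk usage on the node's data path against Elasticsearch's disk allocation watermark defaults (in 5.x: low 85%, high 90%); once usage crosses them, shard allocation is throttled or blocked and bulk requests start failing with `unavailable_shards_exception`. A minimal sketch, where the path `/` is just a placeholder for wherever your data directory is mounted:

```python
import shutil

def disk_usage_pct(path="/"):
    """Return the percentage of disk space used on the filesystem at `path`."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# Elasticsearch 5.x default allocation watermarks.
LOW_WATERMARK = 85.0   # stop allocating new shards to this node
HIGH_WATERMARK = 90.0  # start relocating shards away from this node

pct = disk_usage_pct("/")
print(f"disk usage: {pct:.1f}%")
if pct >= HIGH_WATERMARK:
    print("above high watermark: expect unassigned shards and bulk 503s")
elif pct >= LOW_WATERMARK:
    print("above low watermark: no new shards will be allocated here")
```

On the cluster side, `GET _cat/allocation?v` shows disk use per node and `GET _cluster/health` will tell you how many shards are unassigned.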

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.