CircuitBreaker: [parent] Data too large, data for [<transport_request>]

Hi, we have a cluster running Elasticsearch version 7.1.1; details below.

1.) 4-node cluster where 3 nodes act as data nodes (and are also master-eligible) and one node is configured as a coordinating-only node.
2.) 16 GB of RAM, with 10 GB allocated as heap memory.
3.) 3,437,652 docs & 89 GB of storage space used.
4.) The index has 4 primary shards with 1 replica.

While indexing we receive the error below. I think this happens when data is replicated across nodes via a transport request, but I am not sure why, as I am NOT seeing heap memory reach 95% (the default limit for the parent circuit breaker per the ES docs - https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html ). Can somebody help us understand and fix this issue? Thanks, and we appreciate your help.
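To see what the breakers themselves report, the node stats API exposes each breaker's limit, estimated size, and trip count, and the cat nodes API shows actual heap usage. A minimal check, assuming Elasticsearch is reachable on localhost:9200 (adjust the host and credentials to your setup):

# Per-node breaker stats: limit, estimated size, and "tripped" count for each breaker, including [parent]
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'

# Actual heap usage per node; in 7.x the parent breaker compares against real heap usage by default
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'

Note that heap can spike past 95% between monitoring samples; the "real usage" figure in the error below shows the breaker caught exactly such a moment.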

Error:
org.elasticsearch.transport.RemoteTransportException: [node-2][172.16.3.187:9300][indices:data/write/bulk[s][r]]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [10212897876/9.5gb], which is larger than the limit of [10200547328/9.5gb], real usage: [10210884608/9.5gb], new bytes reserved: [2013268/1.9mb]
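
The numbers in the message line up with the defaults: the limit of 10200547328 bytes is exactly 95% of our 10 GB (10737418240-byte) heap, and the reported real usage of 10210884608 bytes is just above it, so the heap did momentarily cross 95% even though our monitoring never caught it there. A quick sanity check of that arithmetic:

echo $((10737418240 * 95 / 100))   # prints 10200547328, the limit from the error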

Can somebody help on this?

Today we are seeing this error again, and Logstash is marking the ES URL as down (even though we can curl ES and get output); we had to restart Elasticsearch & Logstash to get things working again.

[2019-08-07T20:02:53,284][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>"641322AK8", :_index=>"indexname", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x1daa4747], :response=>{"index"=>{"_index"=>"indexname", "_type"=>"_doc", "_id"=>"641322AK8", "status"=>404, "error"=>{"type"=>"shard_not_found_exception", "reason"=>"no such shard", "index_uuid"=>"_B8mj2qrR-WFWI2fZpmRcg", "shard"=>"2", "index"=>"indexname"}}}}
[2019-08-07T20:02:53,303][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [<transport_request>] would be [10438350010/9.7gb], which is larger than the limit of [10200547328/9.5gb], real usage: [10438281552/9.7gb], new bytes reserved: [68458/66.8kb]", "bytes_wanted"=>10438350010, "bytes_limit"=>10200547328, "durability"=>"PERMANENT"})
[2019-08-07T20:02:53,312][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-08-07T20:02:53,543][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>"074863FR7", :_index=>"indexname", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x78d629c6], :response=>{"index"=>{"_index"=>"indexname", "_type"=>"_doc", "_id"=>"074863FR7", "status"=>404, "error"=>{"type"=>"shard_not_found_exception", "reason"=>"no such shard", "index_uuid"=>"_B8mj2qrR-WFWI2fZpmRcg", "shard"=>"2", "index"=>"indexname"}}}}
[2019-08-07T20:04:21,282][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@xxx.xx.x.xxx:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://elastic:xxxxxx@xxx.xx.x.xxx:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@xxx.xx.x.xxx:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
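
In the meantime we are considering temporarily raising the parent breaker threshold, since indices.breaker.total.limit is a dynamic cluster setting. A sketch, assuming the cluster is reachable on localhost:9200 and that 98% is an acceptable value for our workload (this only buys headroom; it does not fix the underlying heap pressure):

curl -s -X PUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": { "indices.breaker.total.limit": "98%" }
}'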

Any pointers & help on this are appreciated. Thanks.
