Getting a "Data too large" error

Hello, I installed ELK version 7.1.0.

After running all the modules, I started seeing these messages:

At Logstash:
[2019-06-11T03:43:27,582][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [<transport_request>] would be [1030230716/982.5mb], which is larger than the limit of [986061209/940.3mb], real usage: [1029991544/982.2mb], new bytes reserved: [239172/233.5kb]", "bytes_wanted"=>1030230716, "bytes_limit"=>986061209, "durability"=>"TRANSIENT"})
[INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>382}

[2019-06-11T03:57:49,643][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://logstash:xxxxxx@elastic:9200/][Manticore::SocketTimeout] Read timed out {:url=>https://logstash:xxxxxx@elastic:9200/, :error_message=>"Elasticsearch Unreachable: [https://logstash:xxxxxx@elastic:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-06-11T03:57:49,644][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [https://logstash:xxxxxx@elastic:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}

At Elasticsearch:
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1018342960/971.1mb], which is larger than the limit of [986061209/940.3mb], real usage: [1018342960/971.1mb], new bytes reserved: [0/0b]","bytes_wanted":1018342960,"bytes_limit":986061209,"durability":"TRANSIENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1018342960/971.1mb], which is larger than the limit of [986061209/940.3mb], real usage: [1018342960/971.1mb], new bytes reserved: [0/0b]","bytes_wanted":1018342960,"bytes_limit":986061209,"durability":"TRANSIENT"},"status":429}

At Kibana:
[circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1004465832/957.9mb], which is larger than the limit of [986061209/940.3mb], real usage: [1004465832/957.9mb], new bytes reserved: [0/0b], with { bytes_wanted=1004465832 & bytes_limit=986061209 & durability="TRANSIENT" }

I don't understand what "circuit_breaking_exception" means.
Please help me.
Thanks.

Have a look at the jvm.options file and try increasing the heap size there to

-Xms4g
-Xmx4g
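For reference, in Elasticsearch 7.x the heap is set in `config/jvm.options` (Logstash has its own `jvm.options` as well). A minimal sketch, assuming 4 GB is appropriate for your machine; the usual guidance is to keep `-Xms` and `-Xmx` equal and at no more than roughly half of available RAM:

```
# config/jvm.options -- illustrative values, size to your hardware
# Keep minimum and maximum heap equal to avoid resize pauses.
-Xms4g
-Xmx4g
```

Restart the node after changing these; the new heap size takes effect only on restart.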


Either increase your heap size or try to reduce the size of your bulk requests.
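If you want to try the second option, one knob is the Logstash pipeline batch size, since the bulk requests it sends grow with the batch size and worker count. A hedged example in `logstash.yml` (the values below are illustrative; 125 is the default batch size):

```
# logstash.yml -- smaller batches mean smaller bulk requests to Elasticsearch
pipeline.batch.size: 64   # default is 125
pipeline.workers: 2       # in-flight bulk volume scales with workers
```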


Yep, thanks @pathri and @Christian_Dahlqvist.
I forgot to increase the heap size.
I've stopped half of my Logstash instances and am checking the logs.

Increasing the heap size is the way to resolve this problem.
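In case it helps anyone else: once the node is back up, you can confirm the heap it actually started with and watch the circuit-breaker usage via the standard 7.x APIs (adjust the host/credentials to your setup; `localhost:9200` is assumed here):

```shell
# Verify the heap Elasticsearch actually started with
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'

# Inspect circuit-breaker limits and current usage (the "parent" breaker
# is the one tripping in the errors above)
curl -s 'localhost:9200/_nodes/stats/breaker?pretty'
```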

Thank you.

