Circuit Breaker [parent] Data too large, data for [<http_request>]


We are running Elasticsearch version 5.0 in production.
We use the Ingest Node to parse the log data.
Logs are shipped by Filebeat version 5.0.
There are more than 25 different log types being sent, each with a different structure.

The cluster is composed of:
2 Master + Ingest Nodes
1 Master + Data node
2 Data Nodes

After the cluster has been running for some time, both Ingest nodes fail with this message:
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be larger than limit of [2982071500/2.7gb]","bytes_wanted":2982082632,"bytes_limit":2982071500}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be larger than limit of [2982071500/2.7gb]","bytes_wanted":2982082632,"bytes_limit":2982071500},"status":503}

Only a restart of each of those two nodes solves it.
We are encountering this on a daily basis.

I found the following:

It states that there is a bug which was fixed in version 5.2.2.

Before upgrading, I would like to know if there are other options, such as:
Our Elasticsearch output config for each Filebeat uses a bulk size of 10,000.
If we reduce the bulk size, can that solve the issue? Is there a correlation between that parameter and the problem?
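For reference, the relevant part of our Filebeat output configuration looks roughly like this (host names are placeholders; `bulk_max_size` is the Filebeat 5.x setting that corresponds to the bulk size mentioned above):

```yaml
# Sketch of the Filebeat 5.x Elasticsearch output config (hosts are hypothetical).
# bulk_max_size controls how many events are packed into a single bulk request,
# so lowering it shrinks the size of each bulk request the Ingest nodes receive.
output.elasticsearch:
  hosts: ["ingest-node-1:9200", "ingest-node-2:9200"]
  bulk_max_size: 10000   # our current value
```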

Which other or additional config parameters could help work around it?
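For example, the error above comes from the parent circuit breaker, whose limit is controlled by the dynamic cluster setting `indices.breaker.total.limit` (by default a percentage of the JVM heap). Would raising it, as in the sketch below, be a valid workaround or only postpone the failure? (The percentage and host are only examples, not a recommendation.)

```shell
# Sketch: raise the parent breaker limit at runtime on ES 5.x
# (transient setting, reverts on full cluster restart; value is illustrative).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.breaker.total.limit": "80%"
  }
}'
```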



Anyone?


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.