Elasticsearch : circuit_breaking_exception Data too large, data for [<http_request>] would be [419575260/400.1mb], which is larger than the limit of [408420352/389.5mb], real usage

We are experiencing an issue while running the Elasticsearch bulk API against Elastic Cloud 7.8.1.
Below is the exception we get when we increase the batch size above 100. The stack trace is from the Node.js library @elastic/elasticsearch v7.10.0:
```json
{
  "root_cause": [
    {
      "type": "circuit_breaking_exception",
      "reason": "[parent] Data too large, data for [<http_request>] would be [419575260/400.1mb], which is larger than the limit of [408420352/389.5mb], real usage: [407898184/389mb], new bytes reserved: [11677076/11.1mb], usages [request=0/0b, fielddata=29835095/28.4mb, in_flight_requests=11860360/11.3mb, accounting=3170368/3mb]",
      "bytes_wanted": 419575260,
      "bytes_limit": 408420352,
      "durability": "PERMANENT"
    }
  ],
  "type": "circuit_breaking_exception",
  "reason": "[parent] Data too large, data for [<http_request>] would be [419575260/400.1mb], which is larger than the limit of [408420352/389.5mb], real usage: [407898184/389mb], new bytes reserved: [11677076/11.1mb], usages [request=0/0b, fielddata=29835095/28.4mb, in_flight_requests=11860360/11.3mb, accounting=3170368/3mb]",
  "bytes_wanted": 419575260,
  "bytes_limit": 408420352,
  "durability": "PERMANENT"
}
```
We tried to set indices.breaker.total.use_real_memory: false in the Elastic Cloud cluster user settings, based on some of the discussions in CircuitBreakingException: [parent] Data too large IN ES 7.x.
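For reference, this is what we put in the deployment's user settings YAML (Edit deployment > Elasticsearch > User settings). We are not sure whether Elastic Cloud actually allows overriding this particular setting:

```yaml
# Attempted override in the Elastic Cloud user settings.
# It is unclear to us whether this setting is on Elastic Cloud's
# allowlist of user-overridable settings.
indices.breaker.total.use_real_memory: false
```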

Let us know if there is any way of fixing this issue, or how to set this property in the Elastic Cloud config.

It looks like you have a very small cluster and may simply be overwhelming it. As far as I know, there are no magic settings to address this. I would recommend scaling the cluster up to see if that helps.

1 Like

Hi, @Sathish_Shanmugam!
Even if you set indices.breaker.total.use_real_memory: false, the amount of data you can send in a single bulk request is limited to 70% of your heap (doc here). As you can see, you have a limit of 408420352 bytes (around 389 MB), and your request would need around 400 MB (419575260 bytes). So maybe you could slightly increase your heap, or you could split your bulk request into more than one.
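To split the bulk request, one option is to batch your documents by approximate serialized size before each bulk call, so no single request comes near the breaker limit. A minimal sketch in Node.js (the 5 MB budget and the index name `my-index` are illustrative assumptions, not values from your cluster):

```javascript
// Split an array of documents into batches whose serialized JSON size
// stays under a byte budget. Each batch is then small enough to send
// as its own bulk request.
function chunkByBytes(docs, maxBytes) {
  const batches = [];
  let current = [];
  let size = 0;
  for (const doc of docs) {
    const docBytes = Buffer.byteLength(JSON.stringify(doc));
    // Flush the current batch if adding this doc would exceed the budget.
    if (current.length > 0 && size + docBytes > maxBytes) {
      batches.push(current);
      current = [];
      size = 0;
    }
    current.push(doc);
    size += docBytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// Usage with the @elastic/elasticsearch client (index name is hypothetical):
// for (const batch of chunkByBytes(docs, 5 * 1024 * 1024)) {
//   await client.bulk({
//     body: batch.flatMap(doc => [{ index: { _index: 'my-index' } }, doc]),
//   });
// }
```

The v7 client also ships a `client.helpers.bulk` helper that does this kind of size-based flushing for you (see its `flushBytes` option), which may be simpler than hand-rolling the batching.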

2 Likes

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.