Hello everyone,
Configuration : Elastic Cloud - ES 8.11
I've been benchmarking Elastic for the past few days and ran into multiple errors. I couldn't find a viable answer for the most annoying one:
{
  _index: 'MY_INDEX',
  _id: 'aicE5Y0B8q_R8Ba0rtrK',
  status: 429,
  error: {
    type: 'circuit_breaking_exception',
    reason: '[parent] Data too large, data for [indices:data/write/bulk[s]] would be [1924248266/1.7gb], which is larger than the limit of [1717986918/1.5gb], real usage: [1913359656/1.7gb], new bytes reserved: [10888610/10.3mb], usages [request=0/0b, inflight_requests=10888610/10.3mb, model_inference=0/0b, eql_sequence=0/0b, fielddata=197/197b]',
    bytes_wanted: 1924248266,
    bytes_limit: 1717986918,
    durability: 'TRANSIENT'
  }
}
As the reason shows, accepting the bulk request would push the node's memory usage to 1924248266 bytes (1.7gb), which is above the parent circuit breaker limit of 1717986918 bytes (1.5gb). Note that the request itself only reserves 10.3mb of new bytes; the real usage was already at 1.7gb.
When I looked at the performance of my instance, CPU and RAM seemed pretty ok.
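The breaker's decision comes down to simple arithmetic: it trips when current real usage plus the bytes the new request wants to reserve would exceed the limit. A minimal Python sketch using the numbers from the error above (the function name is mine, not an Elasticsearch API):

```python
# Sketch of the parent circuit breaker check: trip when current real memory
# usage plus the bytes the new request reserves would exceed the limit.

def would_trip(real_usage: int, new_bytes: int, limit: int) -> bool:
    """Return True if reserving new_bytes would push usage over the limit."""
    return real_usage + new_bytes > limit

REAL_USAGE = 1_913_359_656   # "real usage: [1913359656/1.7gb]"
NEW_BYTES = 10_888_610       # "new bytes reserved: [10888610/10.3mb]"
LIMIT = 1_717_986_918        # "limit of [1717986918/1.5gb]"

# The 10.3mb bulk request is the straw that breaks the camel's back:
# 1913359656 + 10888610 = 1924248266 bytes, the bytes_wanted in the error.
print(would_trip(REAL_USAGE, NEW_BYTES, LIMIT))
```

This also shows why a small bulk request can fail: the heap was already near the limit before the request arrived.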
Anyway, to diagnose this, start by checking your nodes' circuit breaker stats like this:
GET /_nodes/stats/breaker
Here, you will get plenty of useful information per node, but the most important part is in the "breakers" property of the response.
If I look at the parent breaker, here's what I see:
"parent": {
"limit_size_in_bytes": 1717986918, # Limit in bytes
"limit_size": "1.5gb", # Limit in size
"estimated_size_in_bytes": 1135798704, # Actual memory in bytes
"estimated_size": "1gb", # Actual memory in size
"overhead": 1, # A constant that all estimates for the circuit breaker are multiplied with to calculate a final estimate.
"tripped": 0 # How many time this got triggered (reset on reboot)
}
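To spot which breaker is getting close to its limit, you can compute the estimated/limit ratio per breaker from that response. A minimal Python sketch over a hand-copied stats fragment (field names match the output above; the helper name is mine):

```python
# How full is each circuit breaker? The dict below is hand-copied from a
# _nodes/stats/breaker response; only the fields we need are kept.
breakers = {
    "parent": {
        "limit_size_in_bytes": 1717986918,
        "estimated_size_in_bytes": 1135798704,
        "overhead": 1.0,
        "tripped": 0,
    },
}

def usage_ratio(b: dict) -> float:
    """Fraction of the breaker limit currently used (estimate * overhead / limit)."""
    return b["estimated_size_in_bytes"] * b["overhead"] / b["limit_size_in_bytes"]

for name, b in breakers.items():
    print(f"{name}: {usage_ratio(b):.0%} of limit, tripped {b['tripped']} time(s)")
```

With the numbers above, the parent breaker sits at roughly two thirds of its limit at the moment of the snapshot, which is why the instance looked healthy between bulk requests.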
Let's say that you want to increase this to 2.2gb; you need to run this command:
PUT /_cluster/settings
{
"persistent": {
"indices.breaker.total.limit": "2.2gb"
}
}
This way, I'm increasing my circuit breaker total limit, and my import finally works! I hope this can help you.
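As a sanity check, you can confirm the new limit actually leaves headroom over the bytes_wanted from the failed request. A small Python sketch (Elasticsearch parses byte-size units as binary, so 1gb = 2^30 bytes):

```python
# Check that the new breaker limit covers the bytes the failed bulk request
# wanted, with some headroom. Elasticsearch byte sizes are binary units.
GIB = 1024 ** 3

new_limit = int(2.2 * GIB)       # "indices.breaker.total.limit": "2.2gb"
bytes_wanted = 1_924_248_266     # from the circuit_breaking_exception

headroom = new_limit - bytes_wanted
print(f"new limit: {new_limit} bytes, headroom: {headroom / GIB:.2f}gb")
```

With these numbers, the new limit leaves around 0.4gb of headroom over the request that tripped the breaker.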