Logstash circuit breaking

Hi,

Does anyone know how to solve this circuit breaking exception in Logstash (7.10)?

[2022-09-23T14:38:22,920][INFO ][logstash.outputs.elasticsearch][main][299ec4f1e5994d0fe7b59d4e4d29f50e734f0d6401d909dc198ecbc402ca3983] retrying failed action with response code: 429 ({"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [indices:data/write/bulk[s]] would be [30050216410/27.9gb], which is larger than the limit of [29581587251/27.5gb], real usage: [30050158840/27.9gb], new bytes reserved: [57570/56.2kb], usages [request=0/0b, fielddata=2602183/2.4mb, in_flight_requests=57570/56.2kb, model_inference=0/0b, accounting=1486405036/1.3gb]", "bytes_wanted"=>30050216410, "bytes_limit"=>29581587251, "durability"=>"PERMANENT"})
[2022-09-23T14:38:22,920][INFO ][logstash.outputs.elasticsearch][main][299ec4f1e5994d0fe7b59d4e4d29f50e734f0d6401d909dc198ecbc402ca3983] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>14}
^C

I have tried multiple options, such as:

  1. Increasing the JVM heap to 16g in /etc/logstash/jvm.options (see the snippet after this list), but the issue is still the same.
  2. Restarting the Logstash and Elasticsearch nodes.
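
For reference, this is roughly what I changed in /etc/logstash/jvm.options (the surrounding defaults in the file may differ on your install):

    # /etc/logstash/jvm.options
    # Initial and maximum heap size for the Logstash JVM (raised both to 16g)
    -Xms16g
    -Xmx16g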

Is there a way to discard this 27.9gb of data, or is there a better way to resolve this issue?

Thank you!

That looks like circuit breaking on the Elasticsearch side.

You're going to need to take a look, but it looks to me like your Elasticsearch cluster has very high JVM usage, which could mean you have a very high number of indices or shards, or it could be any number of other reasons.

But I'm pretty sure that error comes from the Elasticsearch side.
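
If you want to check, something like the following against one of your Elasticsearch nodes (adjust host/port and add credentials if your cluster needs them) will show per-node heap usage, the shard count, and the current breaker state:

    # Per-node heap usage (spikes in heap.percent are what trip the parent breaker)
    curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'

    # Rough total number of shards in the cluster
    curl -s 'localhost:9200/_cat/shards' | wc -l

    # Current circuit breaker statistics on each node
    curl -s 'localhost:9200/_nodes/stats/breaker?pretty'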

I agree, this error is coming from Elasticsearch, but the JVM is only 65% utilized on the Elasticsearch side.
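
For what it's worth, the numbers in the original error do line up with the default parent breaker limit of 95% of heap (assuming indices.breaker.total.limit was left at its default, with real-memory tracking enabled as it is in 7.x):

    bytes_limit = 29581587251 bytes ≈ 27.5gb ≈ 0.95 × 29gb heap
    real usage  = 30050158840 bytes ≈ 27.9gb  (above the limit at that moment)

An average of 65% heap usage doesn't rule this out, since the breaker trips on the instantaneous heap usage measured when the bulk request arrives, not the average.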

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.