I am getting a circuit breaking exception in my Logstash logs.

My cluster has two Elasticsearch servers and one Logstash instance. How can I avoid this kind of issue, how can I reduce my bulk request count, and where do I increase the JVM heap size? Below is the error I am getting in Logstash:

Error: ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [<transport_request>] would be [1012589542/965.6mb] new bytes reserved: [87990/85.9kb]", "bytes_wanted"=>1012589542, "bytes_limit"=>986061209, "durability"=>"PERMANENT"})
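On the bulk request side, the number of events Logstash sends per _bulk request is governed by the pipeline batch settings in logstash.yml, so lowering them is one way to make each request smaller. A minimal sketch, with illustrative values rather than recommendations:

    # logstash.yml -- each worker batch becomes one _bulk request to Elasticsearch
    pipeline.batch.size: 75    # events per worker batch; the default is 125
    pipeline.batch.delay: 50   # ms to wait before flushing an incomplete batch

The same settings can also be applied per pipeline in pipelines.yml, or on the command line with the -b flag.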

Hi Satyam, please check the jvm.options file on each Elasticsearch node in the cluster.
Increase the heap size according to the memory available in your infrastructure, then restart all Elasticsearch nodes. Try to keep jvm.options aligned across the whole cluster.
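A minimal sketch of the heap settings in jvm.options (the 4g value is only an example; choose a size that fits your hardware, and note the file path depends on how Elasticsearch was installed):

    # /etc/elasticsearch/jvm.options
    # Set min and max heap to the same value so the heap is never resized at runtime.
    -Xms4g
    -Xmx4g

As a rule of thumb, keep the heap at or below roughly half of the machine's RAM and under about 32 GB so compressed object pointers stay enabled. Recent Elasticsearch versions also support dropping a small override file into the jvm.options.d/ directory instead of editing jvm.options directly.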

Thank you, Piyush, for the information.
