In my cluster I have two Elasticsearch servers and one Logstash instance. What is the solution to avoid this kind of issue, how can I reduce my bulk request count, and where do I increase the JVM heap size? Below is the error I am getting in Logstash:
Error: ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [<transport_request>] would be [1012589542/965.6mb] new bytes reserved: [87990/85.9kb]", "bytes_wanted"=>1012589542, "bytes_limit"=>986061209, "durability"=>"PERMANENT"})
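For context, my understanding is that the heap is set in each product's jvm.options file and that the size of each bulk request is driven by Logstash's pipeline batch settings, roughly as in the sketch below. The file locations and the 4g / 1g values are assumptions based on a default install, not my current configuration:

# Elasticsearch heap: config/jvm.options on each of the two Elasticsearch nodes.
# The parent circuit breaker limit (the 986061209-byte "bytes_limit" in the error)
# is a percentage of this heap, so raising -Xms/-Xmx raises that limit.
-Xms4g
-Xmx4g

# Logstash heap: config/jvm.options on the Logstash host.
-Xms1g
-Xmx1g

# Logstash batch settings: config/logstash.yml. As I understand it, the
# elasticsearch output sends one bulk request per pipeline batch, so lowering
# pipeline.batch.size (default 125) shrinks each bulk request sent to Elasticsearch.
pipeline.batch.size: 125
pipeline.batch.delay: 50

Is that the right place to make these changes, and should I lower the batch size on Logstash or raise the Elasticsearch heap first?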