Hi!
I don't know why I'm getting this error from the Filebeat agents running in our Kubernetes cluster. We scaled the cluster up to get more performance and changed the indexes from 1 shard and 1 replica to 2 shards and 1 replica, hoping that would be more optimal, but now it performs worse than before.
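For context, the shard change was applied through an index template, roughly like this (the template name, index pattern, and Elasticsearch host below are placeholders, not our actual values):

```shell
# Hypothetical sketch of the template change we made.
# New indices matching the pattern get 2 primary shards and 1 replica;
# existing indices keep their old settings until they roll over.
curl -X PUT "http://localhost:9200/_template/filebeat-custom" \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["filebeat-*"],
    "settings": {
      "number_of_shards": 2,
      "number_of_replicas": 1
    }
  }'
```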
This is the error I see in the Filebeat pod:
2020-07-29T12:37:35.671Z ERROR [elasticsearch] elasticsearch/client.go:223 failed to perform any bulk index operations: 429 Too Many Requests: {"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [2041914692/1.9gb], which is larger than the limit of [2040109465/1.8gb], real usage: [2041909248/1.9gb], new bytes reserved: [5444/5.3kb], usages [request=0/0b, fielddata=180233/176kb, in_flight_requests=5444/5.3kb, accounting=27779116/26.4mb]","bytes_wanted":2041914692,"bytes_limit":2040109465,"durability":"PERMANENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [2041914692/1.9gb], which is larger than the limit of [2040109465/1.8gb], real usage: [2041909248/1.9gb], new bytes reserved: [5444/5.3kb], usages [request=0/0b, fielddata=180233/176kb, in_flight_requests=5444/5.3kb, accounting=27779116/26.4mb]","bytes_wanted":2041914692,"bytes_limit":2040109465,"durability":"PERMANENT"},"status":429}
Any suggestions?
Thank you very much