{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1740972440/1.6gb], which is larger than the limit of [1491035750/1.3gb]","bytes_wanted":1740972440,"bytes_limit":1491035750}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1740972440/1.6gb], which is larger than the limit of [1491035750/1.3gb]","bytes_wanted":1740972440,"bytes_limit":1491035750},"status":503}
I cannot update my cluster, upgrade, or do anything. I can only contact support by email, with a 3-day SLA. Any idea how to get around this issue?
If you get an error message like this one, it does not mean that your cluster is down.
It just means that some requests cannot be served because of memory pressure.
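To see which breaker is tripping and how close each one is to its limit, you can look at the node stats (this is a standard API, though under heavy pressure even this request can be rejected by the same breaker):

GET /_nodes/stats/breaker

It reports, per node and per breaker (parent, fielddata, request, in-flight requests), the configured limit, the current estimated size, and how many times it has tripped.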
What do GET _cat/health?v and GET _cat/indices?v give you?
Could you be using parent/child? Maybe you are not using doc_values?
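If you are on a version where doc_values are not enabled by default (the 1.x era), you can turn them on per field in the mapping. A minimal sketch; my_index, my_type, and status are hypothetical names:

PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "status": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}

With doc_values, sorting and aggregations read from disk-backed columnar structures instead of loading fielddata onto the heap, which is exactly the memory the parent breaker is protecting.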
I can only contact support by email, with a 3-day SLA.
If you have support, please go for it. That's the best way to handle your case.
Is it a cloud instance?
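If you cannot upgrade or resize the cluster right now, a commonly suggested stopgap (these are standard APIs, not specific to your setup; the 40% figure is only an example, adjust it to your heap) is to clear the fielddata cache and cap the fielddata breaker below the parent limit:

POST /_cache/clear?fielddata=true

PUT /_cluster/settings
{
  "transient": {
    "indices.breaker.fielddata.limit": "40%"
  }
}

This assumes fielddata is what is filling the heap; if the pressure comes from somewhere else (very large requests, in-flight data), it will not help, and support remains your best route.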
Sorry for missing the posting guidelines; it should be better now.
GET /_cat/indices?v
GET /_cat/nodes?v
GET /_search
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1740972440/1.6gb], which is larger than the limit of [1491035750/1.3gb]","bytes_wanted":1740972440,"bytes_limit":1491035750}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [1740972440/1.6gb], which is larger than the limit of [1491035750/1.3gb]","bytes_wanted":1740972440,"bytes_limit":1491035750},"status":503}