[parent] data too large

Hello, I am seeing frequent occurrences of CircuitBreakingException in our ES cluster:
org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[mdwdata04][10.10.30.66:9302][indices:data/write/bulk[s]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [14685460692/13.6gb], which is larger than the limit of [14663286784/13.6gb], real usage: [14685456088/13.6gb], new bytes reserved: [4604/4.4kb], usages [request=0/0b, fielddata=48467/47.3kb, in_flight_requests=4604/4.4kb, accounting=98557832/93.9mb]];
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [14685460692/13.6gb], which is larger than the limit of [14663286784/13.6gb], real usage: [14685456088/13.6gb], new bytes reserved: [4604/4.4kb], usages [request=0/0b, fielddata=48467/47.3kb, in_flight_requests=4604/4.4kb, accounting=98557832/93.9mb]

The ES version is 7.6.1.
The cluster has three physical servers, each with 40 cores and 126GB of RAM. Each server runs one master node (4gb JVM heap), one coordinating node (8gb JVM heap), and three data nodes (15gb JVM heap each).
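For reference, the node roles and heap sizes can be confirmed with the cat nodes API (the column list here is just one convenient selection):

GET _cat/nodes?v&h=name,node.role,heap.current,heap.max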
We write logs into ES with bulk requests and query them through Kibana. The queries are just Kibana Discover searches with a timestamp sort and a date histogram aggregation, like this (a sketch of the bulk write format follows after the query):
{
  "version": true,
  "size": 500,
  "sort": [
    {
      "@timestamp": {
        "order": "desc",
        "unmapped_type": "boolean"
      }
    }
  ],
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s",
        "time_zone": "Asia/Shanghai",
        "min_doc_count": 1
      }
    }
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    {
      "field": "@timestamp",
      "format": "date_time"
    }
  ],
  "query": {
    "bool": {
      "must": [],
      "filter": [
        {
          "match_all": {}
        },
        {
          "range": {
            "@timestamp": {
              "format": "strict_date_optional_time",
              "gte": "2020-05-21T02:01:25.317Z",
              "lte": "2020-05-21T02:16:25.317Z"
            }
          }
        }
      ],
      "should": [],
      "must_not": []
    }
  },
  "highlight": {
    "pre_tags": [
      "@kibana-highlighted-field@"
    ],
    "post_tags": [
      "@/kibana-highlighted-field@"
    ],
    "fields": {
      "*": {}
    },
    "fragment_size": 2147483647
  }
}
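For completeness, the writes are plain bulk indexing. A minimal sketch of the format (the index name and fields here are made up, not our real mapping):

POST _bulk
{ "index": { "_index": "app-logs-2020.05.21" } }
{ "@timestamp": "2020-05-21T02:01:25.317Z", "level": "INFO", "message": "example log line" }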

I used GET /_nodes/stats/breaker to observe each node, and the results look like this:
"parent" : {
"limit_size_in_bytes" : 14663286784,
"limit_size" : "13.6gb",
"estimated_size_in_bytes" : 11615692168,
"estimated_size" : "10.8gb",
"overhead" : 1.0,
"tripped" : 0
}
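If I read the defaults right, that 13.6gb limit is just the default parent breaker (indices.breaker.total.limit, 95% of the heap when indices.breaker.total.use_real_memory is true): 14663286784 bytes / 0.95 ≈ 15435038720 bytes, i.e. 95% of the ~14.3gb heap_max the JVM actually reports for -Xmx15g. The reported heap_max per node can be checked with (filter_path just trims the response):

GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes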

So I want to know the root cause: why does the estimated_size of the parent breaker keep rising until the breaker trips, and how can I avoid it?
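For what it's worth, we have not overridden any breaker settings; the effective values (defaults included) can be dumped with:

GET _cluster/settings?include_defaults=true&filter_path=defaults.indices.breaker*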
Thanks
