CircuitBreakingException: [parent] Data too large in ES 7.x

The cluster always gets a CircuitBreakingException after upgrading to ES 7.x, especially when running recovery tasks or indexing large amounts of data: [internal:index/shard/recovery/start_recovery] or [cluster:monitor/nodes/info[n]]. The node then leaves the cluster.
Here are the log and node stats.
After I disabled indices.breaker.total.use_real_memory, the circuit breaking exception no longer seems to appear.
Is my problem related to this issue?

Yes, the linked issue is related. We're looking into the conditions under which the breaker might trip even though the node could theoretically handle the extra load. This seems to be mostly related to the workload. In your case, it's best to disable the real memory breaker.
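For reference, a minimal sketch of how that is done (it is a static node setting, so it has to go into elasticsearch.yml on every node and only takes effect after a restart; it cannot be changed through the cluster settings API):

    # elasticsearch.yml -- with the real memory check disabled, the parent breaker
    # tracks the sum of the child breakers against indices.breaker.total.limit (70% of heap by default)
    indices.breaker.total.use_real_memory: false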

It happens again even with the real memory breaker disabled: [parent] Data too large, data for [<http_request>].
It looks like the real memory breaker isn't the root cause.

Can you provide the full message? It will contain information about the different child breakers, which helps explain where memory is being used.
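If it helps, the per-breaker numbers can also be read straight from the node stats API; the breakers section reports the estimated size, limit, overhead, and trip count for the parent breaker and each child breaker:

    GET _nodes/stats/breaker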

ElasticsearchStatusException[Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [<http_request>] would be [30799676956/28.6gb], which is larger than the limit of [30601641984/28.5gb], real usage: [30760015112/28.6gb], new bytes reserved: [39661844/37.8mb]]]
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
    at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2053)
    at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2030)
    at org.elasticsearch.client.RestHighLevelClient$1.onFailure(RestHighLevelClient.java:1947)
    at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onDefinitiveFailure(RestClient.java:857)
    at org.elasticsearch.client.RestClient$1.completed(RestClient.java:560)
    at org.elasticsearch.client.RestClient$1.completed(RestClient.java:537)
    at shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
    at shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
    at shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:412)
    at shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:305)
    at shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:267)
    at shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:116)
    at shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:164)
    at shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:339)
    at shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:317)
    at shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:278)
    at shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    at shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://node:9200], URI [/_bulk?timeout=3m], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [30799676956/28.6gb], which is larger than the limit of [30601641984/28.5gb], real usage: [30760015112/28.6gb], new bytes reserved: [39661844/37.8mb]","bytes_wanted":30799676956,"bytes_limit":30601641984,"durability":"TRANSIENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [<http_request>] would be [30799676956/28.6gb], which is larger than the limit of [30601641984/28.5gb], real usage: [30760015112/28.6gb], new bytes reserved: [39661844/37.8mb]","bytes_wanted":30799676956,"bytes_limit":30601641984,"durability":"TRANSIENT"},"status":429}
        at org.elasticsearch.client.RestClient$1.completed(RestClient.java:552)
        ... 16 more

Here are the node stats: https://del.dog/ibaruginif

The error shows that you're still using the real memory circuit breaker (see real usage: [30760015112/28.6gb]), whereas you claim you're not?
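As a rough cross-check (assuming the node runs a 30 GiB heap, which the reported figures suggest): the limit in the error is exactly 95% of a 30 GiB heap, and 95% is the default parent limit when the real memory breaker is enabled, whereas with indices.breaker.total.use_real_memory: false the parent limit defaults to 70% of the heap:

    0.95 * 32212254720 bytes (30 GiB) = 30601641984 bytes (28.5 GB) -> the limit reported in the error
    0.70 * 32212254720 bytes (30 GiB) = 22548578304 bytes (21 GB)   -> the limit you would expect with use_real_memory: false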

I confirm that I have disabled the real memory circuit breaker:

GET problem_node:9200/_cluster/settings?include_defaults&flat_settings&local&filter_path=defaults.indices*
{
  "defaults": {
    "indices.analysis.hunspell.dictionary.ignore_case": "false",
    "indices.analysis.hunspell.dictionary.lazy": "false",
    "indices.breaker.accounting.limit": "100%",
    "indices.breaker.accounting.overhead": "1.0",
    "indices.breaker.fielddata.limit": "40%",
    "indices.breaker.fielddata.overhead": "1.03",
    "indices.breaker.fielddata.type": "memory",
    "indices.breaker.request.limit": "60%",
    "indices.breaker.request.overhead": "1.0",
    "indices.breaker.request.type": "memory",
    "indices.breaker.total.limit": "70%",
    "indices.breaker.total.use_real_memory": "false",
    "indices.breaker.type": "hierarchy",
    "indices.cache.cleanup_interval": "1m",
    "indices.fielddata.cache.size": "-1b",
    "indices.lifecycle.poll_interval": "10m",
    "indices.mapping.dynamic_timeout": "30s",
    "indices.memory.index_buffer_size": "20%",
    "indices.memory.interval": "5s",
    "indices.memory.max_index_buffer_size": "6g",
    "indices.memory.min_index_buffer_size": "48mb",
    "indices.memory.shard_inactive_time": "5m",
    "indices.queries.cache.all_segments": "false",
    "indices.queries.cache.count": "10000",
    "indices.queries.cache.size": "10%",
    "indices.query.bool.max_clause_count": "1024",
    "indices.query.query_string.allowLeadingWildcard": "true",
    "indices.query.query_string.analyze_wildcard": "false",
    "indices.recovery.internal_action_long_timeout": "1800000ms",
    "indices.recovery.internal_action_timeout": "15m",
    "indices.recovery.max_bytes_per_sec": "1024m",
    "indices.recovery.max_concurrent_file_chunks": "2",
    "indices.recovery.recovery_activity_timeout": "1800000ms",
    "indices.recovery.retry_delay_network": "5s",
    "indices.recovery.retry_delay_state_sync": "500ms",
    "indices.requests.cache.expire": "0ms",
    "indices.requests.cache.size": "1%",
    "indices.store.delete.shard.timeout": "30s"
  }
}

How did you disable the real memory circuit breaker? Did you put indices.breaker.total.use_real_memory: false into the elasticsearch.yml of all the nodes and restart them?

Also, why are you showing the defaults in the settings API call? The default for indices.breaker.total.use_real_memory should be true. The setting needs to be explicitly disabled.
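One way to double-check what each node actually picked up from its elasticsearch.yml (just a suggestion for verification) is the node info API, which lists the explicitly configured node settings:

    GET _nodes/settings?filter_path=nodes.*.settings.indices.breaker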

I have disabled the real memory circuit breaker on all data nodes, but not on the master-only nodes (because the master nodes never hit this exception).
The indices.breaker.total.use_real_memory value shows as false under defaults because the setting is set in elasticsearch.yml; settings from elasticsearch.yml are reported in the defaults section of this API rather than as persistent or transient cluster settings.