Hi,
We are using the CCR feature as part of our process to migrate data between two clusters, but we are experiencing some unexpected behaviour in the remote cluster.
As soon as we begin to follow an index, the remote cluster reports this warning:
{"type": "server", "timestamp": "2021-12-23T10:27:41,495Z", "level": "WARN", "component": "o.e.i.b.request", "cluster.name": "remote-cluster", "node.name": "node-1", "message": "[request] New used memory 7689662394 [7.1gb] for data of [<reduce_aggs>] would be larger than configured breaker: 6442450944 [6gb], breaking", "cluster.uuid": "6s-6ymZ5SZqzc8KdmOqn9H", "node.id": "i-5Ilv6VJOSg6zWrm0IjVT" }
{"type": "server", "timestamp": "2021-12-23T10:28:41,564Z", "level": "WARN", "component": "o.e.i.b.request", "cluster.name": "remote-cluster", "node.name": "node-1", "message": "[request] New used memory 9933680952 [9.2gb] for data of [preallocate[aggregations]] would be larger than configured breaker: 6442450944 [6gb], breaking", "cluster.uuid": "6s-6ymZ5SZqzc8KdmOqn9H", "node.id": "i-5Ilv6VJOSg6zWrm0IjVT" }
{"type": "server", "timestamp": "2021-12-23T10:29:41,603Z", "level": "WARN", "component": "o.e.i.b.request", "cluster.name": "remote-cluster", "node.name": "node-1", "message": "[request] New used memory 12233179472 [11.3gb] for data of [preallocate[aggregations]] would be larger than configured breaker: 6442450944 [6gb], breaking", "cluster.uuid": "6s-6ymZ5SZqzc8KdmOqn9H", "node.id": "i-5Ilv6VJOSg6zWrm0IjVT" }
{"type": "server", "timestamp": "2021-12-23T10:30:41,615Z", "level": "WARN", "component": "o.e.i.b.request", "cluster.name": "remote-cluster", "node.name": "node-1", "message": "[request] New used memory 14468545192 [13.4gb] for data of [preallocate[aggregations]] would be larger than configured breaker: 6442450944 [6gb], breaking", "cluster.uuid": "6s-6ymZ5SZqzc8KdmOqn9H", "node.id": "i-5Ilv6VJOSg6zWrm0IjVT" }
And Kibana reports the following:
My first idea was to modify some parameters in the follow request to reduce the impact, but after several tests the result was the same: the error persists. I also tried setting parameters like max_read_request_operation_count, max_read_request_size, and max_outstanding_read_requests to minimum values to see any visible effect, but nothing happened (I tried the write properties as well) via the follow API.
It looks like all the settings were ignored.
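For reference, this is the kind of follow request we tried, with the throttling settings forced to low values (the index and cluster names are placeholders, and the exact values are just examples of the minimums we experimented with):

```
PUT /follower-index/_ccr/follow
{
  "remote_cluster": "leader-cluster",
  "leader_index": "leader-index",
  "max_read_request_operation_count": 128,
  "max_read_request_size": "1mb",
  "max_outstanding_read_requests": 1,
  "max_write_request_operation_count": 128,
  "max_write_request_size": "1mb",
  "max_outstanding_write_requests": 1
}
```

Even with these values the breaker warnings on the remote cluster looked the same.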
Our goal is to run CCR in the background with minimal impact on the remote cluster, but this issue is blocking us completely.
Some extra info:
GET _nodes/stats/jvm?pretty&human
"jvm" : {
"mem" : {
"heap_used" : "2.7gb",
"heap_used_percent" : 34,
"heap_committed" : "8gb",
"heap_max" : "8gb"
....
}
}
GET _nodes/stats/breaker
"request" : {
"limit_size_in_bytes" : 6442450944,
"limit_size" : "6gb",
"estimated_size_in_bytes" : 42801814968,
"estimated_size" : "39.8gb",
"overhead" : 1.0,
"tripped" : 47
},...
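For what it's worth, we know the request breaker limit can be raised dynamically with something like the snippet below, but since the estimated size (39.8gb) is already far above the entire heap (8gb), raising the limit looks like it would only mask the problem rather than fix it:

```
PUT _cluster/settings
{
  "transient": {
    "indices.breaker.request.limit": "75%"
  }
}
```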
I hope someone can give us a suggestion on how to resolve this. Let me know if you need more info.
Kind regards.