Our Elasticsearch cluster runs well for the most part, except that the client node crashes with an out-of-memory error when executing large queries. I believe it crashes during the gather phase: when all the data nodes send their partial results back, the combined dataset becomes too large to fit within the heap limit.
I searched for an appropriate circuit breaker that would cancel a query if the data is potentially too large to fit in memory, especially on client nodes. I tried setting
indices.breaker.request.limit on all ES nodes, but it seems to apply only to data nodes. Did I miss something, or is this the expected behavior? If it's expected, is there any built-in solution to my problem? Thanks.
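For reference, here is roughly how I applied the setting, using the cluster settings API (the 40% limit is just an example value, not what I actually used):

```shell
# Set the request circuit breaker limit dynamically on the cluster;
# the percentage is relative to each node's JVM heap.
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "indices.breaker.request.limit": "40%"
  }
}'
```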