Balancing cluster by segments memory usage

Hi,

I have a cluster running on 8 nodes with many indices. Each node holds 1,300-1,400 segments, but the nodes differ a lot in segment memory usage: the minimum is 12.9 GB, the maximum 17.9 GB, and the average is ~15 GB per node. Is there a way to spread the indices among the nodes more evenly, so that each node ends up closer to the average? Right now the nodes with high segment memory usage suffer from frequent and long GC pauses. I'm running Elasticsearch 6.5.
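For reference, the per-node numbers above can be read from the `_cat/nodes` API. Below is a minimal sketch (not part of the original setup, and assuming an unauthenticated cluster reachable at localhost:9200) that pulls segment memory per node and shows how far each node deviates from the cluster average:

```python
# Minimal sketch: read per-node segment memory via the _cat/nodes API and
# compare each node against the cluster average. Assumes no authentication
# and a coordinating node on localhost:9200 -- adjust for your cluster.
import requests

ES_URL = "http://localhost:9200"  # assumed endpoint

# Ask the cat API for node name and segment memory, in raw bytes, as JSON.
resp = requests.get(
    f"{ES_URL}/_cat/nodes",
    params={"h": "name,segments.memory", "bytes": "b", "format": "json"},
)
resp.raise_for_status()

usage = {node["name"]: int(node["segments.memory"]) for node in resp.json()}
average = sum(usage.values()) / len(usage)

# Print nodes from heaviest to lightest with their deviation from the average.
for name, mem in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    deviation = (mem - average) / average * 100
    print(f"{name}: {mem / 2**30:.1f} GiB ({deviation:+.1f}% vs average)")
```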

Regards -
Chris Ksiezyk
