My cluster has 4 machines; every node has 8 CPUs and 32GB of memory, and the cluster has thousands of indices and shards. In recent days some nodes have crashed with out-of-memory errors during queries. I found that segments memory uses about 14GB of the 16GB heap and is never released. If I run more queries, nodes crash with OOM errors, so what can I do to avoid this?
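For reference, the per-node numbers above can be read from the nodes stats API. A minimal sketch in Python using the requests library, assuming the cluster is reachable at http://localhost:9200 (host, port, and any security settings are placeholders, not from the original post):

```python
# Minimal sketch: report per-node segments memory against heap usage.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/jvm,indices").json()

for node in resp["nodes"].values():
    heap_used = node["jvm"]["mem"]["heap_used_in_bytes"]
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    seg = node["indices"]["segments"]
    print(
        f"{node['name']}: segments={seg['count']}, "
        f"segments_memory={seg['memory_in_bytes'] / 2**30:.1f} GiB, "
        f"heap={heap_used / 2**30:.1f}/{heap_max / 2**30:.1f} GiB"
    )
```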
Add more memory/nodes, or reduce the amount of data you keep on the cluster and query.
How many shards and indices in total?
1000 indices and 7000 shards in total; every index has 1 segment and is about 6GB.
You're wasting a lot of memory on shards. With 4 nodes and indices that big, I would only have 1 primary + 1 replica shard.
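As a minimal sketch of that advice, newly created indices can be given fewer shards up front with an index template; the template name and the "logs-*" pattern below are illustrative, and existing indices would have to be reindexed, since the shard count of an existing index can't be changed in place:

```python
# Minimal sketch: template so new indices get 1 primary shard + 1 replica.
# Legacy template syntax ("template" as the pattern field); newer versions
# use the _index_template endpoint with "index_patterns" instead.
import requests

template = {
    "template": "logs-*",
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1,
    },
}

resp = requests.put(
    "http://localhost:9200/_template/one-shard-per-index",
    json=template,
)
print(resp.status_code, resp.json())
```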
Are you using doc values?
Do you mean 1 shard per index?
What are doc values?
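In case it helps: doc values are an on-disk, column-oriented copy of field values used for sorting and aggregations, so that data doesn't have to be loaded onto the heap as fielddata. A minimal mapping sketch, assuming an older Elasticsearch release where doc values were not yet the default; the index and field names are illustrative:

```python
# Minimal sketch: create an index whose fields store doc values, so sorting
# and aggregations read from disk rather than building fielddata on the heap.
# Pre-5.x style mapping; on 2.x+ doc values are already the default for
# not_analyzed string and numeric fields.
import requests

mapping = {
    "mappings": {
        "event": {
            "properties": {
                "status": {"type": "string", "index": "not_analyzed", "doc_values": True},
                "bytes": {"type": "long", "doc_values": True},
            }
        }
    }
}

resp = requests.put("http://localhost:9200/events-000001", json=mapping)
print(resp.status_code, resp.json())
```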
OK, I'll give it a try, thank you.