Crashed with out of memory


(Jianjun Mao) #1

Hi all,

My cluster has 4 machines; every node has 8 CPUs and 32 GB of memory, and the cluster holds thousands of indices and shards. In recent days some nodes have crashed with out-of-memory errors during queries. I found that segments memory uses about 14 GB of the 16 GB heap and is never released. If I run more queries, nodes crash with OOM errors. What can I do to avoid this?
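For reference, the heap and segments memory figures above can be read from the node stats API. A minimal sketch in Python (the cluster address is an assumption):

```python
import requests

ES = "http://localhost:9200"  # assumption: address of any node in the cluster

# Heap usage per node
jvm = requests.get(f"{ES}/_nodes/stats/jvm").json()
for node in jvm["nodes"].values():
    mem = node["jvm"]["mem"]
    print(f'{node["name"]}: heap '
          f'{mem["heap_used_in_bytes"] / 2**30:.1f} / '
          f'{mem["heap_max_in_bytes"] / 2**30:.1f} GiB')

# Segments memory per node -- this part of the heap stays allocated
# for as long as the segments are open, which is why it never drops
seg = requests.get(f"{ES}/_nodes/stats/indices/segments").json()
for node in seg["nodes"].values():
    used = node["indices"]["segments"]["memory_in_bytes"]
    print(f'{node["name"]}: segments memory {used / 2**30:.1f} GiB')
```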


(Mark Walkom) #2

Add more memory or nodes, or reduce the amount of data you store on the cluster and query.

How many shards and indices in total?
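You can count them from the `_cat` APIs, which return one line per index or shard; a minimal sketch (cluster address assumed):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster address

# _cat endpoints return one line per index / per shard
indices = requests.get(f"{ES}/_cat/indices?h=index").text.splitlines()
shards = requests.get(f"{ES}/_cat/shards?h=index").text.splitlines()
print(f"{len(indices)} indices, {len(shards)} shards in total")
```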


(Jianjun Mao) #3

1000 indices and 7000 shards in total; every index has 1 segment and is about 6 GB.


(Mark Walkom) #4

You're wasting a lot of memory on shards. With 4 nodes and indices that big, I would use only 1 primary + 1 replica shard per index.
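For new indices, that looks like the sketch below (index name is hypothetical; shard counts are fixed at creation time, so existing oversized indices would need to be reindexed into indices shaped like this):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster address

# Create an index with a single primary shard and one replica
body = {
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1,
    }
}
resp = requests.put(f"{ES}/my-new-index", json=body)  # index name is hypothetical
print(resp.json())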

Are you using doc values?


(Jianjun Mao) #5

Do you mean 1 shard per index?
What are doc values?


(Mark Walkom) #6

Yes, one per index, with a replica.

https://www.elastic.co/guide/en/elasticsearch/reference/current/doc-values.html
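As a sketch, a mapping that stores sort/aggregation fields as on-disk doc values instead of heap-resident fielddata might look like this (index and field names are hypothetical; recent versions enable doc values by default for these field types, while 1.x-era clusters had to opt in per field with slightly different mapping syntax):

```python
import requests

ES = "http://localhost:9200"  # assumption: cluster address

# Fields used for sorting/aggregations keep their column data on disk
# (doc values) rather than loading fielddata onto the heap
body = {
    "mappings": {
        "properties": {
            "timestamp": {"type": "date", "doc_values": True},
            "status": {"type": "keyword", "doc_values": True},
        }
    }
}
resp = requests.put(f"{ES}/docvalues-demo", json=body)  # index name is hypothetical
print(resp.json())
```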


(Jianjun Mao) #7

OK, I'll give it a try. Thank you! :grinning:

