Hi!
I've found out that the recommended number of shards per Elasticsearch data node is
20 per GB of heap memory, e.g. if I have a node with a 16 GB Java heap, it can handle 20 * 16 = 320 shards.
Is this rule of thumb for open indices or closed indices?
We use ES 5.5.2
That is a rule of thumb for the maximum number of shards, as we see a lot of clusters that are massively oversharded. I would recommend having fewer than this if possible. Although closed indices do not use any heap, they do increase the size of the cluster state, so they are not completely free either.
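If it helps, here is a minimal sketch of how you could compare each node's current shard count against that guideline. It assumes a cluster reachable at http://localhost:9200 with no authentication and uses Python with the requests library; adjust the URL and security settings for your environment.

```python
# Rough sketch: compare each data node's shard count with the
# "20 shards per GB of heap" guideline (a ceiling, not a target).
import requests

ES = "http://localhost:9200"  # assumption: local cluster, no auth

# Max heap per node (node name -> heap_max_in_bytes)
jvm = requests.get(f"{ES}/_nodes/stats/jvm").json()
heap = {n["name"]: n["jvm"]["mem"]["heap_max_in_bytes"]
        for n in jvm["nodes"].values()}

# Shards currently allocated per node
alloc = requests.get(f"{ES}/_cat/allocation", params={"format": "json"}).json()

for row in alloc:
    node = row["node"]
    if node not in heap:              # skips the UNASSIGNED row, if present
        continue
    shards = int(row["shards"])
    heap_gb = heap[node] / 1024 ** 3
    limit = int(20 * heap_gb)         # guideline maximum for this node
    print(f"{node}: {shards} shards, guideline max ~{limit} (heap {heap_gb:.1f} GB)")
```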
Hi @dadoonet and @Christian_Dahlqvist!
Thank you for your answers, we will try to keep the number of shards as low as possible.
Regarding memory use: if I have 5 primaries and 1 replica, i.e. 10 open shards per index in total, I assume that both the primary and the replica shards will be using memory?
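For reference, a minimal sketch of the kind of index settings I mean (the index name and the local, unauthenticated cluster URL are just placeholders):

```python
# 5 primaries with 1 replica each = 5 * (1 + 1) = 10 shards in total.
# Every allocated copy is a full Lucene index on its node.
import requests

ES = "http://localhost:9200"  # placeholder: local cluster, no auth

settings = {
    "settings": {
        "number_of_shards": 5,      # primaries
        "number_of_replicas": 1     # one replica copy per primary
    }
}

resp = requests.put(f"{ES}/my-index", json=settings)  # "my-index" is hypothetical
print(resp.json())
```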
BR Andreas