As far as I know, I should keep the number of shards per node below 20 per GB of configured heap.
However, I understand that Elasticsearch heap memory usage has decreased significantly since v7.7.
If I am running version 7.7 or later, should I still keep the number of shards per node below 20 per GB of heap?
It depends. If you have heap available, you can increase the shard count and see how your cluster behaves.
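As a minimal sketch, the 20-shards-per-GB rule of thumb can be turned into a quick check. The heap size and shard count below are hypothetical placeholders; in practice you would read them from your own cluster (for example via the `_cat/nodes` and `_cat/shards` APIs):

```python
# Quick check of the "20 shards per GB of heap" rule of thumb.
# heap_gb and shard_count are made-up example values, not real cluster data.

def max_recommended_shards(heap_gb: float, shards_per_gb: int = 20) -> int:
    """Upper bound on shards for a node with the given heap, per the rule of thumb."""
    return int(heap_gb * shards_per_gb)

heap_gb = 30        # example: a node configured with a 30 GB heap
shard_count = 750   # example: shards currently hosted on that node

limit = max_recommended_shards(heap_gb)
print(limit)               # 600
print(shard_count > limit) # True -> over the heuristic limit for this node
```

This is only a heuristic check, not a hard limit; as noted above, the real answer is to observe how your cluster behaves.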
How many shards per GB of heap can I have for a typical shard with v7.7 or later?
The majority of the work we do around reducing heap is for document-based usage. The data required to manage shards is harder to reduce. We have implemented frozen indices and that sort of thing to help, though.
Ultimately, you will need to see how that works out on your own cluster.
I understand. Thank you.
Searching a large number of small shards is often slower than searching a smaller number of larger ones, so it is not just about heap. More shards and indices also increase the size of the cluster state, which can be a factor too, especially for larger clusters or ones that are dramatically oversharded.