Total shards per node

Hello all,

There seems to be an overall consensus about the number of shards that should run per node in an Elasticsearch cluster. Different posts (including some in this forum) recommend keeping this number small, maybe under a thousand shards per node.

I would be interested to know whether a large number of shards is in fact a problem for a node.
And if it is, why does the configuration setting cluster.routing.allocation.total_shards_per_node default to unbounded?

Thanks,
Michail

Different posts (including some in this forum) recommend keeping this number small, maybe under a thousand shards per node.

A thousand shards sounds like a lot to me. I'd try to limit it to a few hundred.

I would be interested to know whether a large number of shards is in fact a problem for a node.

Yes, because there's a fixed memory overhead for each shard.
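
If you want to see how that overhead is spread out, you can start by checking how many shards each node is currently holding. Here is a rough sketch using the _cat/allocation API and Python's requests library, assuming an unsecured cluster reachable at http://localhost:9200 (adjust the address and any authentication to your own setup):

```python
# Rough sketch: list how many shards each node currently holds.
# Assumes an unsecured cluster at http://localhost:9200 (hypothetical address).
import requests

rows = requests.get(
    "http://localhost:9200/_cat/allocation",
    params={"format": "json", "h": "node,shards"},
).json()

for row in rows:
    print(f"{row['node']}: {row['shards']} shards")
```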

And if it is, why does the configuration setting cluster.routing.allocation.total_shards_per_node default to unbounded?

Because there's no obvious optimal value to use as the default?
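
That said, if you do want to cap it yourself, the setting is dynamic and can be changed through the cluster settings API. A minimal sketch, again assuming an unsecured local cluster; the value 400 is only illustrative, so pick a limit that fits your own hardware:

```python
# Minimal sketch: cap the number of shards allocated to any single node.
# The value 400 is illustrative, not a recommendation for every cluster.
import requests

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={
        "persistent": {
            "cluster.routing.allocation.total_shards_per_node": 400
        }
    },
)
print(resp.json())
```

Keep in mind this is a hard allocation limit: shards that would push a node over it stay unassigned, so leave yourself some headroom.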

Is there a way to estimate this overhead if it is fixed?