There seems to be an overall consensus about the number of shards that should run per node in an Elasticsearch cluster. Different posts (including on this forum) recommend keeping this number small, perhaps under a thousand shards per node.
I would be interested to know whether a large number of shards is de facto a problem for a node.
And if it is, why does the cluster.routing.allocation.total_shards_per_node setting default to unbounded?
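For context, this is the dynamic cluster setting I mean; as far as I understand it, a bound could be applied like this (the value 1000 is just an illustrative number, not a recommendation):

```
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": 1000
  }
}'
```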