We are running ECK in our on-prem Kubernetes cluster and have quite a large number of fairly small indexes. We recently hit the limit of 1000 shards per node, after which Elasticsearch began silently dropping data coming in for new indexes.
What's the best way to increase the default limit of 1000 shards per node? Is there an environment variable that can be injected to accomplish this?
cluster.max_shards_per_node (Dynamic): Limits the total number of primary and replica shards for the cluster. Elasticsearch calculates the limit as follows:
cluster.max_shards_per_node * number of non-frozen data nodes
Shards for closed indices do not count toward this limit. Defaults to 1000. A cluster with no data nodes is unlimited.
Elasticsearch rejects any request that creates more shards than this limit allows. For example, a cluster with a cluster.max_shards_per_node setting of 100 and three data nodes has a shard limit of 300. If the cluster already contains 296 shards, Elasticsearch rejects any request that adds five or more shards to the cluster.
Note that frozen shards have their own independent limit (cluster.max_shards_per_node.frozen).
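One way to raise the limit with ECK is to set it in the nodeSet config of the Elasticsearch resource, which ECK renders into each node's elasticsearch.yml. Below is a minimal sketch; the cluster name, version, node count, and the value 2000 are placeholders and not taken from the original post:

```yaml
# Sketch: raising cluster.max_shards_per_node via the ECK manifest.
# Names, version, and node count below are assumptions for illustration.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster          # hypothetical cluster name
spec:
  version: 8.13.4           # use the version you are actually running
  nodeSets:
  - name: default           # hypothetical nodeSet name
    count: 3
    config:
      # Raise the per-node shard limit from the default of 1000.
      cluster.max_shards_per_node: 2000
```

Because the setting is dynamic, it can also be changed on a running cluster without a restart by sending it as a persistent setting to the cluster settings API (PUT _cluster/settings); values set through the API take precedence over anything in elasticsearch.yml.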