We are running ECK in our on-prem Kubernetes cluster, and we have quite a large number of pretty small indexes. We recently hit the limit of 1000 shards per node, after which Elasticsearch began silently dropping new data coming in for new indexes.
What's the best way to increase the default limit of 1000 shards per node? Is there an environment variable that can be injected to accomplish this?
Hi @pschlesi, welcome to the community!
See Cluster shard limits. Be careful, though: too many shards can lead to other performance issues, so it is probably worth reading through this as well:
(Dynamic) Limits the total number of primary and replica shards for the cluster. Elasticsearch calculates the limit as follows:

cluster.max_shards_per_node * number of non-frozen data nodes

Shards for closed indices do not count toward this limit. Defaults to 1000. A cluster with no data nodes is unlimited.

Elasticsearch rejects any request that creates more shards than this limit allows. For example, a cluster with a cluster.max_shards_per_node setting of 100 and three data nodes has a shard limit of 300. If the cluster already contains 296 shards, Elasticsearch rejects any request that adds five or more shards to the cluster.

Notice that frozen shards have their own independent limit.
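To answer the environment-variable part of the question: cluster.max_shards_per_node is a dynamic cluster setting, so rather than injecting an environment variable you would normally raise it through the cluster update settings API. A minimal sketch (the value 2000 here is just an illustrative number, not a recommendation; pick one appropriate for your node count and heap):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}
```

Using persistent rather than transient keeps the setting across a full cluster restart. You can check the effective value afterwards with GET _cluster/settings?include_defaults=true.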