Can this value be resized? When the shard count on a single node in our cluster reaches 1000, that node can no longer write.
Having a large number of small shards in a cluster is generally inefficient and can cause stability and performance problems as the cluster state grows. The 1000-shard limit is there to protect users from oversharding. It can be overridden, although I would not recommend this: I have seen numerous users with oversharded clusters suffering severe problems, with the high shard count itself preventing them from fixing the underlying issues.
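For reference, the limit being hit here is the `cluster.max_shards_per_node` setting. If you do decide to raise it despite the advice above, a sketch of overriding it via the cluster settings API (the host and the value 2000 are placeholders, not recommendations):

```shell
# Raise the per-node shard limit via a persistent cluster setting.
# 2000 is an arbitrary example value; this only postpones the problem.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 2000}}'
```

This requires a running cluster, and the change applies cluster-wide, so every node's shard budget increases.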
Increasing the limit is the easy way out, but it just moves the problem into the future, by which point it may be much more difficult to fix the underlying issue. I would instead recommend you look at how you index data and try to reduce the shard count.
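One common way to reduce the shard count of an existing index is the shrink API, which copies an index into a new one with fewer primary shards. A sketch, assuming a source index named `my-index` and a node named `shrink-node` (both names are placeholders), with the target shard count dividing the source count evenly:

```shell
# 1. Block writes and relocate a copy of every shard to one node,
#    which the shrink API requires before it can run.
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{
        "settings": {
          "index.routing.allocation.require._name": "shrink-node",
          "index.blocks.write": true
        }
      }'

# 2. Shrink into a new index with a single primary shard.
curl -X POST "localhost:9200/my-index/_shrink/my-index-shrunk" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"index.number_of_shards": 1}}'
```

For time-based data, also consider fewer primary shards per index in your index templates, or rollover/ILM policies that create larger, less frequent indices.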
Thanks for your quick reply
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.