Number of open shards exceeds cluster soft limit

I am currently using ES 6.6.1 (3 nodes), all running on CentOS 7.
I am planning an upgrade to 6.8 and then 7.1.

The upgrade assistant says this:
Number of open shards exceeds cluster soft limit
There are [3514] open shards in this cluster, but the cluster is limited to [1000] per data node, for [3000] maximum.

The upgrade assistant links to this documentation, which gives me a 404 error when I open it:
https://www.elastic.co/guide/en/elasticsearch/reference/master/breaking-changes-7.0.html#_cluster_wide_shard_soft_limit

How should I fix this on CentOS?
Is this related to the max open files limit in Linux?
Thanks

Take a look at https://www.elastic.co/guide/en/elasticsearch/reference/7.1/misc-cluster.html#cluster-shard-limit, it should help.
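As the linked page describes, the soft limit is controlled by the `cluster.max_shards_per_node` setting in 7.x. As a hedged sketch only (raising the limit is a stopgap, not a fix for oversharding), the settings-update body could be built like this:

```python
import json

def max_shards_settings(per_node_limit: int) -> str:
    """Build the JSON body for PUT _cluster/settings.

    cluster.max_shards_per_node defaults to 1000 per data node in 7.x;
    raising it only postpones the problem described in this thread.
    """
    body = {"transient": {"cluster.max_shards_per_node": per_node_limit}}
    return json.dumps(body)

# Example: a body that would raise the limit to 1200 shards per node.
print(max_shards_settings(1200))
```

The resulting JSON would be sent to the `_cluster/settings` endpoint (e.g. with curl and a `Content-Type: application/json` header).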

The limit of 1000 shards per node is set quite high in my opinion, so as you are only just exceeding it, you are oversharded, but not massively so. I would recommend trying to reduce the number of shards in the cluster.
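The arithmetic behind the warning can be sketched as follows (numbers taken from the message above):

```python
import math

data_nodes = 3
per_node_limit = 1000        # 7.x default soft limit per data node
open_shards = 3514           # count reported by the upgrade assistant

cluster_limit = data_nodes * per_node_limit   # 3 * 1000 = 3000
excess = open_shards - cluster_limit          # shards over the limit

# Nodes needed to fit the current shard count under the default limit:
nodes_needed = math.ceil(open_shards / per_node_limit)

print(excess)        # 514 shards over the soft limit
print(nodes_needed)  # 4 data nodes would be enough at the default limit
```

So either the shard count comes down below 3000, or a fourth data node would bring the cluster back under the default soft limit.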

How do I reduce the number of shards? Adding more nodes?

What is your use case? If you are using time based indices you can try to reduce the number of primary shards for new indices or switch from daily to weekly or monthly indices if volumes are low and retention periods long.
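To illustrate the daily-to-weekly switch, here is a hypothetical sketch (the `weekly_index` helper and the `logstash` prefix are assumptions, not anything from this thread): routing each day's data to an ISO-week index name instead of a daily one cuts the index count, and therefore the shard count, by roughly 7x.

```python
from datetime import date

def weekly_index(prefix: str, d: date) -> str:
    """Map a calendar date to a weekly index name, e.g. logstash-2019.w23."""
    year, week, _ = d.isocalendar()
    return f"{prefix}-{year}.w{week:02d}"

# All seven days of an ISO week resolve to the same index name,
# so a week's events land in one index instead of seven.
print(weekly_index("logstash", date(2019, 6, 3)))
```

Combined with fewer primary shards per new index (set via an index template), this is usually enough to bring a mildly oversharded cluster back under the limit.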

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.