I am currently running ES 6.6.1 (3 nodes), all on CentOS 7.
I am planning an upgrade to 6.8 and then to 7.1.
The Upgrade Assistant says this:
Number of open shards exceeds cluster soft limit
There are [3514] open shards in this cluster, but the cluster is limited to [1000] per data node, for [3000] maximum.
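For reference, that count roughly matches what the cluster health API reports (Kibana Console syntax; `active_shards` counts primaries plus replicas):

```
GET _cluster/health?filter_path=active_shards,active_primary_shards
```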
The limit of 1000 shards per node is set quite high in my opinion, so as you are just about exceeding it you are oversharded, but not massively so. I would recommend trying to reduce the number of shards in the cluster.
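For existing indices that no longer receive writes, one option is the shrink API. A minimal sketch, where `my-index` and the node name `node-1` are placeholders to adapt:

```
# 1. Block writes and gather a copy of every shard onto one node
PUT /my-index/_settings
{
  "index.routing.allocation.require._name": "node-1",
  "index.blocks.write": true
}

# 2. Once relocation has finished, shrink to a single primary shard
POST /my-index/_shrink/my-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
```

Note that the target shard count must be a factor of the source's, and you still need to delete the old index afterwards to actually free its shards.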
What is your use case? If you are using time-based indices you can try to reduce the number of primary shards for new indices, or switch from daily to weekly or monthly indices if volumes are low and retention periods are long.
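If your indices are still on the 6.x default of 5 primary shards, an index template is the usual way to lower that for everything created from now on. A sketch, with the template name and index pattern as placeholders:

```
PUT _template/low-shard-logs
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

Combined with switching the naming pattern from daily to monthly indices, that would take you from 10 new shards per day (5 primaries plus 5 replicas, assuming the 6.x defaults) to 2 per month.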