The cluster hit the circuit breaker, so we increased the limit temporarily.
We then restarted one of the nodes, and that node is now stuck at 500 shards; it will not initialize any further shards. There are multiple indices, each with multiple UNASSIGNED shards.
The other node has 1650 shards and is healthy.
We are using Elasticsearch 6.4.
Could you please explain why this issue happens and advise on a resolution?
There have been a few times where, after a node ran out of storage, some shards refused to re-initialize even after more storage was added. In that situation, closing and then re-opening the affected index fixed it for us. This was on earlier versions of ES, though.
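The close/re-open cycle above is just two requests against the indices API (here from Kibana Dev Tools; `my-index` is a placeholder for the index with unassigned shards). Re-opening the index triggers a fresh allocation attempt for its shards:

```
POST /my-index/_close

POST /my-index/_open
```

Before doing that, it may be worth asking the cluster why a shard is unassigned with the allocation explain API (available in 6.4), since the response names the blocking decider (disk watermark, recovery throttling, and so on):

```
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}
```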