Elasticsearch 5.4.3
Cluster: 100,000+ shards, 54 nodes (36 data nodes), 500+ indices.
When the cluster is initializing, or when I bulk-load large amounts of data into it, I can't change the cluster settings. There are 1000+ pending tasks and the queue is at 100,000+.
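These figures can be read straight from the cluster APIs; a minimal sketch, assuming Python with the `requests` library and a node reachable at `localhost:9200` (adjust for your cluster):

```python
import requests

ES = "http://localhost:9200"  # assumed node address; adjust for your cluster

# Cluster-level pending tasks (index creation, mapping updates, shard allocation, ...)
tasks = requests.get(f"{ES}/_cluster/pending_tasks").json()["tasks"]
print(f"pending tasks: {len(tasks)}")

# Per-node thread pool state; in 5.x the bulk queue is what backs up under heavy indexing
print(requests.get(f"{ES}/_cat/thread_pool?v&h=node_name,name,active,queue,rejected").text)
```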
That is over 2,800 shards per data node, which is far too many. I suspect you will need to reduce this. Have a look at the blog post I linked to for some guidelines on shard count and size.
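For reference, 100,000+ shards spread across 36 data nodes works out to roughly 2,800 shards per node. The per-node distribution can be confirmed with the `_cat/allocation` API; a minimal sketch under the same assumptions (Python, `requests`, node at `localhost:9200`):

```python
import requests

ES = "http://localhost:9200"  # assumed node address

# One row per data node: shard count, disk used, disk available, host and node name.
print(requests.get(f"{ES}/_cat/allocation?v").text)
```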
It is in my previous post, the one where I asked about the unit.
Increasing queue sizes and timeouts like you have done does not solve the problems you are having in the cluster; it just puts a band-aid on them. I would recommend trying to address the underlying issues while your cluster is still in a workable state. The worse state your cluster gets into, the more difficult it will be to solve any problem properly.
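One way to attack the shard count on existing indices that are no longer being written to is the shrink API (available since 5.0); going forward, lowering `number_of_shards` in your index templates stops the problem from growing. A sketch of the shrink path, assuming a hypothetical index `logs-2017.08.01` and a data node named `data-node-01`:

```python
import requests

ES = "http://localhost:9200"        # assumed node address
SRC = "logs-2017.08.01"             # hypothetical source index
DST = "logs-2017.08.01-shrunk"      # hypothetical shrunk target index

# 1. Block writes and relocate a copy of every shard onto one node;
#    both are prerequisites for _shrink.
requests.put(f"{ES}/{SRC}/_settings", json={
    "settings": {
        "index.routing.allocation.require._name": "data-node-01",  # hypothetical node name
        "index.blocks.write": True,
    }
})

# 2. Shrink into a new index with fewer primary shards; the target count
#    must be a factor of the source index's shard count.
requests.post(f"{ES}/{SRC}/_shrink/{DST}", json={
    "settings": {"index.number_of_shards": 1},
})
```

Once the shrunk index has been verified, the original can be deleted so the overall shard count actually drops.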