After deleting half the data, both the shard count and the total document size went down, and the cluster became healthy again.
I am trying to find the root cause: is there a limit on the number of shards or on index size per node?
What metadata is kept in memory that could occupy a huge amount of memory as the cluster grows?
Please don't post images of text as they are hardly readable and not searchable.
Instead, paste the text and format it with the </> icon. Check the preview window.
Too many shards per node seems to me like one of the most likely causes. It can also be caused by fielddata, depending on your mapping and whether or not you are using doc values.
Also, you should upgrade.
But the first thing to do is to reduce the number of shards per node.
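If it helps to confirm where the memory is going, here is a minimal sketch (Python, standard library only) that reads shards per node and fielddata memory over the HTTP API. It assumes the cluster is reachable at http://localhost:9200 without authentication, so adjust the host and security settings for your setup.

```python
# Minimal sketch: inspect shards per node and fielddata memory via the
# Elasticsearch HTTP API. Assumes http://localhost:9200 with no auth.
import json
import urllib.request

ES = "http://localhost:9200"

def get(path):
    with urllib.request.urlopen(ES + path) as resp:
        return resp.read().decode("utf-8")

# Shards and disk usage per node (plain-text cat API).
print(get("/_cat/allocation?v"))

# Fielddata memory per node; large values here suggest fielddata is
# being loaded on analyzed fields instead of relying on doc values.
stats = json.loads(get("/_nodes/stats/indices/fielddata"))
for node_id, node in stats["nodes"].items():
    mem = node["indices"]["fielddata"]["memory_size_in_bytes"]
    print(node.get("name", node_id), mem, "bytes of fielddata")
```

If the fielddata numbers are large, check your mapping and move those fields to doc values where possible; if they are small, the shard count per node is the more likely culprit.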