Replica shards are not allocated, so they do not consume heap. However, you have far too many shards, so I would recommend reducing that count dramatically, as it impacts performance and potentially also stability.
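If you want to verify that on your side, the cluster health output shows how many shards are active versus unassigned; a minimal check could look like this (the localhost:9200 endpoint is just a placeholder for your own cluster address):

```
# active_shards and active_primary_shards count what is allocated;
# unassigned_shards shows the replica copies that are not allocated
curl -s "localhost:9200/_cluster/health?pretty"
```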
Wow, thank you so much for the quick reply.
Kindly advise whether doing the following will improve performance:
1. Making a 3-node cluster
2. Using the shrink API to reduce the number of shards
In case of a 3-node cluster without shard reduction, will there be overhead on one node?
How much space do your indices take up on disk? Shrinking the indices will help, but if they are very small it may be better to reindex into e.g. consolidated monthly indices in order to reduce the shard count further. Adding nodes will help, but may not be necessary unless you want to improve resiliency and achieve high availability.
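As a rough sketch of both options (index names like logs-2024.01, the target node name node-1 and the endpoint are placeholders; adjust them to your own setup), checking disk usage, shrinking, and reindexing would look roughly like this:

```
# Disk usage per index, largest first (pri = primary shards, rep = replicas)
curl -s "localhost:9200/_cat/indices?v&h=index,pri,rep,store.size&s=store.size:desc"

# Shrink, step 1: block writes and move all copies of the index onto one node
curl -s -X PUT "localhost:9200/logs-2024.01/_settings" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "node-1"
  }
}'

# Shrink, step 2: create a single-shard copy
# (the target shard count must be a factor of the source index's shard count)
curl -s -X POST "localhost:9200/logs-2024.01/_shrink/logs-2024.01-shrunk" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}'

# Alternative: reindex many small indices into one consolidated index
curl -s -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "logs-2024.*" },
  "dest":   { "index": "logs-2024" }
}'
```

After verifying the shrunk or reindexed index, the old source indices still need to be deleted to actually free their shards.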
And to add, the daily indices have already been consolidated into monthly ones. As the recommended maximum is about 20 shards per GB of JVM heap and my JVM heap is 8 GB, that gives 8 * 20 = 160 shards, but there are 1500+ shards.
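For reference, the per-node shard count and configured heap can be read off the _cat APIs, e.g. (endpoint again a placeholder):

```
# Number of shards and disk used per node
curl -s "localhost:9200/_cat/allocation?v"

# Configured heap per node, to compare against the ~20 shards per GB of heap rule of thumb
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent,node.role"
```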