Aim for shards with an average size in the tens of GB. If you only have 50 GB of storage per node, that means a single-digit number of shards per node. There is no need for 5 replicas per index, and fewer indices would be better.
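To make the arithmetic explicit, here is a minimal sketch, assuming the numbers above (50 GB of node storage, shards in the tens of GB); the 25 GB and 10 GB targets are hypothetical illustrations, not recommendations:

```python
# Rough shard-count sanity check. Assumes 50 GB of storage per node and
# a target average shard size in the tens of GB; the specific targets
# (25 GB, 10 GB) are illustrative assumptions.
def max_shards_per_node(storage_gb: float, target_shard_gb: float) -> int:
    """How many shards of the target size fit on one node's storage."""
    return int(storage_gb // target_shard_gb)

print(max_shards_per_node(50, 25))  # 2 shards of ~25 GB fill 50 GB
print(max_shards_per_node(50, 10))  # 5 shards even at a 10 GB target
```

Either way you end up with a single-digit shard count per node, which is why adding more indices or replicas only makes the problem worse.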
You should not override this setting. Since it seems you already have, I would recommend fixing your sharding issue and then setting it back to the default value. Increasing it further will most likely lead to a situation where you can no longer fix your cluster at all, as cluster state updates time out, and you could potentially lose all your data and have to start over from scratch.
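As a sketch of how to revert it, assuming the setting in question is `cluster.max_shards_per_node` (default 1000) and a cluster reachable at `localhost:9200` (an assumption), setting it to `null` via the cluster settings API restores the default once the shard count is back under the limit:

```shell
# Assumes the overridden setting is cluster.max_shards_per_node and the
# cluster is reachable at localhost:9200; adjust for your environment.
# null removes the persistent override, falling back to the default (1000).
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": null}}'
```

Reduce the shard count first (e.g. by deleting or shrinking indices); otherwise new shard allocations will be rejected as soon as the default limit applies again.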