I am using Elasticsearch 5.5.2 (3 nodes, 64 GB each) with month-wise index creation.
Daily data insertion is 17 GB, i.e. 17 * 30 = 510 GB/month.
Right now I have 6 shards (85 GB/shard) and 1 replica per index, there is already around 400 GB of data in the current index, and I am not seeing any problems.
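The per-shard arithmetic behind the question, as a quick sketch (primary shards only; the replica doubles total disk usage but not the size of each individual shard):

```python
# Rough per-primary-shard sizing for one monthly index.
monthly_data_gb = 17 * 30  # ~510 GB of primary data per month

for shards in (6, 9):
    per_shard_gb = monthly_data_gb / shards
    print(f"{shards} shards -> {per_shard_gb:.0f} GB per primary shard")
```

With 9 shards, each primary would drop to roughly 57 GB, closer to the often-cited guideline of keeping shards below ~50 GB.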
Should I create next month's index with 9 shards, or keep it the same?
A slight increase in insertion or search time is acceptable, but I cannot afford node or shard failure.
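For concreteness, the next monthly index would be created with something like this (index name is illustrative; settings follow the 5.x create-index API):

```
PUT /logs-2017.09
{
  "settings": {
    "index": {
      "number_of_shards": 9,
      "number_of_replicas": 1
    }
  }
}
```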
Thanks.
Actually, this has dependencies across teams, and upgrading is not an option for now.
Otherwise, ILM would have been sufficient to handle these use cases.
Can you guide me on the best thing I can do on the same version?