Optimal shard size: is 85 GB per shard OK for this use case?

I am using Elasticsearch 5.5.2 (3 nodes, 64 GB each) with month-wise index creation.
Daily data insertion is about 17 GB, i.e. 17 * 30 = 510 GB/month.

Right now I have 6 shards (85 GB/shard) and 1 replica per index. There is already around 400 GB of data in the current index and I am not seeing any problems.

Should I create next month's index with 9 shards (set up front, along the lines of the sketch below), or keep the same settings?
A slight increase in indexing or search time is acceptable, but I cannot afford a node or shard failure.
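A minimal sketch (Python + `requests`) of what I mean by creating the monthly index up front with an explicit primary shard count; the host, index name, and shard count here are illustrative assumptions, not a recommendation:

```python
# Sketch only: create next month's index ahead of time with an explicit shard count.
# ES_HOST and the index name are placeholders for this example.
import requests

ES_HOST = "http://localhost:9200"    # assumption: point this at your cluster

index_name = "mydata-2021.11"        # assumption: month-wise index naming
settings = {
    "settings": {
        "number_of_shards": 9,       # ~510 GB/month / 9 ≈ 57 GB per primary shard
        "number_of_replicas": 1
    }
}

resp = requests.put(f"{ES_HOST}/{index_name}", json=settings)
resp.raise_for_status()
print(resp.json())
```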

Welcome to our community! :smiley:

This version is positively ancient and you need to upgrade as a matter of urgency.

You are running a version with known issues around handling large numbers of shards.

Upgrading is the best thing you can do here, along with using ILM.
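ILM does not exist on 5.5.2, but for reference, once you are on a 7.x release a policy along these lines would handle rollover and retention for you (the policy name and thresholds below are illustrative, not prescriptive):

```python
# Sketch only: an ILM policy (7.x+) that rolls over when the index's primary shards
# total ~50 GB or the index is 30 days old, and deletes indices after 180 days.
import requests

ES_HOST = "http://localhost:9200"    # assumption

policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_size": "50gb", "max_age": "30d"}
                }
            },
            "delete": {
                "min_age": "180d",
                "actions": {"delete": {}}
            }
        }
    }
}

resp = requests.put(f"{ES_HOST}/_ilm/policy/monthly-logs", json=policy)
resp.raise_for_status()
```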

Thanks. :slight_smile:
Actually, upgrading has dependencies across teams and is not an option for now.
Otherwise ILM would have been sufficient for this use case.

Can you guide me on the best thing I can do on the same version?

The best thing is to manage it manually; there's no other option on that version, sorry.
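In practice, "manually" on 5.5.2 usually means a scheduled script (or Elasticsearch Curator) that creates each monthly index ahead of time and drops the old ones. A rough sketch of the deletion side, with the index prefix and retention window as assumptions:

```python
# Sketch only: delete monthly indices older than a retention window.
# Index naming pattern, host, and retention are assumptions; run from cron.
from datetime import datetime
import requests

ES_HOST = "http://localhost:9200"    # assumption
PREFIX = "mydata-"                   # assumption: indices named mydata-YYYY.MM
KEEP_MONTHS = 6                      # assumption: retention window

resp = requests.get(f"{ES_HOST}/_cat/indices/{PREFIX}*?format=json&h=index")
resp.raise_for_status()

now = datetime.utcnow()
for row in resp.json():
    name = row["index"]
    try:
        created = datetime.strptime(name[len(PREFIX):], "%Y.%m")
    except ValueError:
        continue                     # skip indices that don't match the pattern
    age_months = (now.year - created.year) * 12 + (now.month - created.month)
    if age_months > KEEP_MONTHS:
        print(f"Deleting {name} ({age_months} months old)")
        requests.delete(f"{ES_HOST}/{name}").raise_for_status()
```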

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.