Hi, hope you're all doing well. I'd like to see if someone can help me with this.
I have a cluster of 35 nodes running version 7.4.2:
3 x master nodes
2 x coordinators
8 x hot
2 x cold
20 x warm
We get daily indices that are sometimes in the TB range and sometimes above 20 GB, and I'd like suggestions on the following:
- How can I calculate the number of shards that would work for these volumes?
- What factors should be taken into consideration for this calculation? Does the number of hot nodes matter versus the number of warm nodes, or is it the total number of data nodes?
- Is there a way to set a limit on shard size, and if so, what happens when it is exceeded?
- When you do a shrink operation, say during a lifecycle rotation when we move indices from hot to warm, how can I calculate the target shard count? Does it matter if, say, I have 50 shards and shrink to 1? Or would it be better not to shrink at all and just move the shards with a one-to-one mapping, e.g. if the index has 8 shards, move it to warm with 8?
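For reference on the last question, from what I've read the shrink API requires the target primary shard count to evenly divide the source primary shard count (e.g. 8 can shrink to 4, 2, or 1, but not to 3). A minimal sketch to enumerate the valid targets (`valid_shrink_targets` is just a hypothetical helper name):

```python
def valid_shrink_targets(source_shards: int) -> list[int]:
    """Shard counts an index can be shrunk to: the target primary
    shard count must be a factor of the source primary shard count."""
    return [n for n in range(1, source_shards) if source_shards % n == 0]

print(valid_shrink_targets(8))   # [1, 2, 4]
print(valid_shrink_targets(50))  # [1, 2, 5, 10, 25]
```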
Currently I'm creating these indices, which average about 1 TB, with 24 shards. But I'm not sure if that's too little or too much. I got that number by assuming 1 primary shard per hot node + 1 replica per node = 24 shards, with a primary shard size of 83 GB each.
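To show the kind of calculation I'm after, here is a small sketch of my current reasoning, assuming the commonly cited guideline of keeping primary shards somewhere in the 10-50 GB range (the function name and the 50 GB target are my assumptions, not anything official):

```python
import math

def primary_shard_count(index_size_gb: float, target_shard_gb: float = 50.0) -> int:
    """Primary shards needed so that no primary exceeds the target size."""
    return max(1, math.ceil(index_size_gb / target_shard_gb))

# A 1 TB daily index with ~50 GB primaries:
primaries = primary_shard_count(1024)  # 21 primaries
total = primaries * 2                  # with 1 replica each -> 42 shards total
print(primaries, total)                # 21 42
```

With my current 24-shard setup the primaries come out around 83 GB, which is why I'm unsure whether I should be targeting a smaller shard size instead.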
Also, not sure if it's worth mentioning that the number of data disks differs between hot and warm:
hot nodes have about 12 data disks and warm nodes about 8, all of them configured as JBOD.