Are there any limiting factors when picking the amount of memory to give to my Elasticsearch nodes? What I mean is: are there any metrics that would impose a minimum "hard limit" on the amount of memory for the machines in my cluster, below which I could expect nodes to crash from running out of memory?
For example, if I have indices in the 90GB range with 3 shards each, that means each shard is roughly 30GB. Does this have any implications for machine size? Should each node have at least 30GB of memory, or does that not matter as much?
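For context, this is roughly how I'm measuring shard sizes (a quick sketch using Python and the _cat/shards API; the localhost:9200 address is just a placeholder for my cluster):

```python
# Rough sketch: list per-shard store sizes in GB via the _cat/shards API.
# Assumes the cluster is reachable at localhost:9200 with no auth.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"v": "true", "h": "index,shard,prirep,store", "bytes": "gb"},
)
print(resp.text)
```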
My goal is to allocate the smallest machines possible, and I'm just trying to figure out what number not to go below so I can avoid predictable out-of-memory crashes.