Can someone let me know how many shards each node should hold to prevent under/over-utilization of an ES cluster with 3 dedicated data nodes, 1 dedicated master node & 1 dedicated client node? The cluster is expected to hold Logstash indices from 5 different services. An index is created per service per day, and we retain one week of data, so the cluster holds about 35 open indices at a time. The average size of each index will be about 500 MB. As per the docs, 500 MB is not a big size, so it does not seem like a good idea to go with the default of 5 shards per index.
I know that there is no magic formula for this & that it depends on the nature of the documents, the index & the search queries. What I am asking is: to get started, do people follow any heuristics to determine the number of shards per index, given the number of indices & the average size of each index?
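For instance, would a simple back-of-envelope calculation like the one below be a reasonable starting point? (The ~20 GB target shard size here is my own assumption for illustration, not something I found in the docs.)

```python
import math

def shards_for_index(index_size_mb, target_shard_size_mb=20_000):
    # Rough heuristic (my assumption): size each shard toward a
    # target (~20 GB here) and never go below one shard per index.
    return max(1, math.ceil(index_size_mb / target_shard_size_mb))

# My case: 35 open indices, ~500 MB each, 3 dedicated data nodes.
shards_per_index = shards_for_index(500)
total_primary_shards = 35 * shards_per_index   # primaries only, no replicas
shards_per_data_node = total_primary_shards / 3

print(shards_per_index, total_primary_shards, shards_per_data_node)
```

By this logic a 500 MB index would need only a single primary shard, which is what makes me doubt the default of 5.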