Using 1.6.0
We use a daily index pattern, so far 11 indexes / 11 days.
We have about 34 million docs per day, relatively small at about 3 KB each.
Indexes have 4 shards + 1 replica.
Each index is about 8 GB on disk (replica included).
Each shard reports about 6 MB of memory usage in the segments section.
So is the following right?
11 indexes x 8 shards per index (4 primaries + 4 replicas) x 6 MB = 528 MB / 4 nodes = 132 MB per node, plus whatever filter cache, field data, etc.
So for 365 days we would need 365 x 8 x 6 MB / 4 ≈ 4.3 GB per node (call it 5 GB with headroom), plus whatever is needed operationally.
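A quick back-of-the-envelope sketch of that math, assuming segment memory scales linearly with the number of open indexes and shards stay evenly balanced across the nodes (both assumptions, not guarantees):

```python
# Rough per-node segment-memory estimate from the numbers above.
SHARDS_PER_INDEX = 8      # 4 primaries + 4 replicas
SEGMENT_MB_PER_SHARD = 6  # as reported in the segments section
NODES = 4

def per_node_segment_mb(days: int) -> float:
    """Total segment memory across all daily indexes, split evenly per node (MB)."""
    return days * SHARDS_PER_INDEX * SEGMENT_MB_PER_SHARD / NODES

print(per_node_segment_mb(11))   # current 11 days -> 132.0 MB
print(per_node_segment_mb(365))  # a full year     -> 4380.0 MB (~4.3 GB)
```

Note this only covers segment memory; filter cache, field data, and indexing buffers come on top, and real shard sizes will drift as daily volume changes.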
Then, depending on usage, we can close older indexes to free memory, and maybe add a couple more boxes and move some indexes there, that type of thing...