TLDR: what specs are appropriate for client, data, and master nodes when ingesting 250GB/day?
I've been tasked with building out an ELK Stack as my company would like to move away from Splunk. We've already begun using the ELK Stack template that AWS provides, but we would like more control over configuration. With that said, I've been reading a lot of documentation and I think I have a good idea of the specs for each node, but I still wanted to reach out to the community in case someone has a more definitive answer. This clustered ELK environment would be ingesting around 250GB/day, possibly growing in the near future. This is what I was thinking:
10 total nodes
2 client nodes
3 data nodes
3 master nodes
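For reference, here's a sketch of how I'd expect the three roles to be split in elasticsearch.yml (this assumes pre-7.x style role settings, where a "client" node is a coordinating-only node; someone correct me if this is off):

```yaml
# master-eligible node: cluster state only, no data, no client traffic
node.master: true
node.data: false

# data node: holds and indexes the shards
# node.master: false
# node.data: true

# client (coordinating-only) node: routes searches/bulk requests
# node.master: false
# node.data: false

# with 3 master-eligible nodes, quorum should be 2 to avoid split brain
discovery.zen.minimum_master_nodes: 2
```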
From my reading I have learned that the master nodes don't require much RAM or disk, so I figure maybe 8GB of RAM and 50GB of disk?
From my reading I have learned that data nodes work best with 64GB of RAM but still perform well at 32GB. I was thinking maybe 2TB of disk for each data node, with at least 1yr retention on indexes.
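Here's my rough back-of-the-envelope math on what 1yr of retention implies (assuming no compression gains and 1 replica, which are just my guesses; actual on-disk ratios will differ):

```python
# Back-of-the-envelope storage estimate for 1yr retention.
# Assumptions (mine, not from any official sizing guide):
#   - 250 GB/day of raw ingest
#   - on-disk index size roughly equal to raw size
#   - 1 replica per shard
daily_gb = 250
retention_days = 365
replicas = 1
data_nodes = 3

raw_total_gb = daily_gb * retention_days           # raw data over a year
with_replicas_gb = raw_total_gb * (1 + replicas)   # primary + replica copies
per_node_gb = with_replicas_gb / data_nodes        # spread across data nodes

print(f"raw over 1yr:       {raw_total_gb} GB")
print(f"with replicas:      {with_replicas_gb} GB")
print(f"per data node (x3): {per_node_gb:.0f} GB")
```

By that math 2TB per node looks way undersized for a full year at 250GB/day, so I may be missing something about compression or tiering here.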
I'm not sure at all what the specs should be for the client nodes. Any guidance?