Hardware requirements for ELK

If we make the simplified assumption that your index data on disk takes up the same amount of space as the raw data, and that you will keep a replica for high availability, the amount of indexed data on disk will be twice your raw volume, which works out to 1.2TB of indices generated per day. Over 90 days that is over 100 TB of data, so I would expect you to need considerably more than 3 Elasticsearch nodes.
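To make the arithmetic explicit, here is a minimal sketch of that calculation in Python. The 0.6 TB/day raw volume is inferred from the 1.2TB figure above, and the index ratio and replica count are just the simplified assumptions stated above, not recommendations:

```python
# Back-of-the-envelope sizing under the simplified assumptions above:
# on-disk index size == raw size, plus one replica for high availability.
raw_per_day_tb = 0.6   # inferred from the 1.2 TB/day figure above
index_ratio = 1.0      # assumed on-disk index size relative to raw data
replicas = 1           # one replica copy for high availability

daily_tb = raw_per_day_tb * index_ratio * (1 + replicas)
retention_days = 90
total_tb = daily_tb * retention_days

print(f"{daily_tb:.1f} TB/day -> {total_tb:.0f} TB over {retention_days} days")
# 1.2 TB/day -> 108 TB over 90 days
```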

If we were to assume that each node can handle e.g. 5TB of data, you would need around 20 nodes. Whether this is a high or low estimate will depend on your hardware and your performance requirements.
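Continuing the sketch above, the node count is just the retained total divided by the assumed per-node capacity, rounded up. The 5TB per node is purely illustrative; you would tune it to your own hardware:

```python
import math

total_tb = 108      # 90-day total from the sketch above
tb_per_node = 5     # assumed per-node capacity, adjust for your hardware

nodes = math.ceil(total_tb / tb_per_node)
print(nodes)  # 22, i.e. "around 20 nodes"
```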

The amount of data a node can handle is usually driven by query latency requirements or heap usage; you can have a look at this webinar for a discussion of the trade-offs. To reduce the size your indices take up on disk, and to do better than the simplified assumption used above, I would recommend the tuning advice in this section in the docs.
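As one concrete example of the kind of tuning covered there, switching the index codec to best_compression typically shrinks on-disk size at the cost of some CPU. Here is a minimal sketch that applies it through a legacy index template; the cluster URL, template name, and index pattern are assumptions for illustration, and the exact template API varies with your Elasticsearch version:

```python
import requests

# Hypothetical template applying best_compression to future log indices.
template = {
    "index_patterns": ["logs-*"],           # assumed index naming scheme
    "settings": {
        "index.codec": "best_compression",  # smaller indices, slightly more CPU
        "number_of_replicas": 1,            # matches the HA assumption above
    },
}

resp = requests.put(
    "http://localhost:9200/_template/logs_best_compression",  # assumed cluster URL
    json=template,
)
resp.raise_for_status()
```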