Hi all,
I'm asking for help with sizing an Elasticsearch cluster. I have to index about 5 TB of logs a day (from 7am to about 7pm) with 7 days of retention. That's about 150,000 events per second coming from roughly 100,000 machines, with dozens of different log types.
Assumptions:
- About 50 concurrent Kibana users;
- Scripted fields need to be computed (I think with regex too);
- Total space required: about 70 TB;
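(The 70 TB figure is my own estimate: 5 TB/day × 7 days ≈ 35 TB of primary data, doubled to ~70 TB assuming one replica per shard.)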
My questions are:
- I think about 15 nodes are required, each with 16 cores, 64 GB of RAM, and 5 TB of storage, so every node handles about 10,000 events per second (150,000 eps ÷ 15) and roughly 4.7 TB of data (70 TB ÷ 15). What do you think?
- With this scenario, what is the best Filebeat/Logstash architecture? Do I need to use a queue? (I sketched what I'm imagining below.)
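If a queue is the way to go, I imagine something like Filebeat → Kafka → Logstash → Elasticsearch. Here is a minimal Filebeat sketch of what I mean; Kafka is just my assumption for the queue, and all hosts, paths, and the topic name are placeholders I made up:

```yaml
# Sketch: ship logs to Kafka instead of directly to Logstash, so the queue
# absorbs bursts and Logstash/Elasticsearch can consume at their own pace.
# All hosts, paths, and the topic name below are placeholders.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log    # placeholder path

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: "logs"                 # placeholder topic name
  compression: gzip             # reduce network traffic from ~100,000 shippers
  required_acks: 1              # wait for the partition leader to ack each batch
```

On the consuming side, Logstash would read from Kafka via its kafka input plugin and write to the cluster with the elasticsearch output. Does that kind of decoupling make sense at this volume?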
Thanks.
Kind regards.