Is this sufficient hardware to handle 300 to 600 GB daily data volume?

Hi Experts,

I have gone through many posts and blogs on hardware recommendations. I am planning to use the architecture below to handle 300 GB to 600 GB of daily data volume.

ES Production cluster

Data type: time-series data; users will open dashboards to get daily, weekly, and monthly reports.

  1. 3 master nodes (2 CPUs, 8 GB RAM, 4 GB heap; node.master: true, node.data: false)
  2. 3 data nodes (8 CPUs, 64 GB RAM; node.master: false, node.data: true)
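One way the two roles above could be expressed in `elasticsearch.yml` (a sketch using the pre-7.9 boolean role settings mentioned in the list; in 7.9+ the equivalent is the `node.roles` setting):

```yaml
# --- elasticsearch.yml on a dedicated master-eligible node ---
node.master: true
node.data: false
node.ingest: false

# --- elasticsearch.yml on a data node (not master-eligible) ---
# node.master: false
# node.data: true
# node.ingest: false
```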

Number of shards per day: I am aware the recommended maximum size for one shard is about 50 GB, so I am planning to have 7 primary shards and 1 replica per daily index. Is this a good approach?
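A quick sanity check on that shard count: 7 primaries works at the low end (300 GB / 7 ≈ 43 GB per shard), but at 600 GB/day it gives ~86 GB per shard, above the 50 GB guideline. A minimal sketch of the arithmetic, assuming the daily volumes from the post and treating 50 GB as the target ceiling (replicas copy the primaries, so they add storage but do not change per-shard size):

```python
MAX_SHARD_GB = 50  # common sizing guideline, not a hard limit

def primaries_needed(daily_gb, max_shard_gb=MAX_SHARD_GB):
    """Smallest primary-shard count that keeps each shard under max_shard_gb."""
    return -(-daily_gb // max_shard_gb)  # ceiling division

for daily_gb in (300, 600):
    p = primaries_needed(daily_gb)
    print(f"{daily_gb} GB/day -> {p} primaries, ~{daily_gb / p:.0f} GB per shard")
```

So at the high end of the stated range, something closer to 12 primaries per daily index would be needed to stay under the guideline.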

I was also wondering which node Kibana should connect to. I know ideally it should be a coordinating node, but I do not have one, so can I use the data nodes for this purpose?
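Since every Elasticsearch node can coordinate requests, one option is simply to list the data nodes in `kibana.yml` so Kibana spreads its query load across them. A sketch, assuming Kibana 7+ (which uses `elasticsearch.hosts`) and placeholder hostnames:

```yaml
# kibana.yml: point Kibana at all three data nodes (hostnames are
# placeholders). Each node then coordinates the queries it receives.
elasticsearch.hosts:
  - "http://data-node-1:9200"
  - "http://data-node-2:9200"
  - "http://data-node-3:9200"
```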

I do not think I'll need a dedicated ingest node or coordinating node for this data volume. I am planning to use one Logstash machine to send data directly to one of the data nodes.
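Rather than pointing Logstash at a single data node, the elasticsearch output can list all three, so Logstash load-balances and fails over if one node goes down. A sketch of the output section, with placeholder hostnames and an assumed daily index name:

```
output {
  elasticsearch {
    # Listing all data nodes avoids a single point of failure on ingest.
    hosts => ["data-node-1:9200", "data-node-2:9200", "data-node-3:9200"]
    index => "logs-%{+YYYY.MM.dd}"  # one index per day, as in the plan above
  }
}
```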
