In my case, I need to handle 3 TB of data in Elasticsearch. We have a job that rotates the index once a day, so each day a new index of roughly 100 GB is created; over 30 days that gives 30 indices (30 × 100 GB = 3000 GB). We need to build monthly reports over this data using aggregation queries (mostly terms aggregations), but running aggregations across the full 3 TB crashes the client node. Could anyone help me with this?
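For context, here is a sketch of the kind of monthly terms aggregation we run — the index pattern and field name below are illustrative placeholders, not our real ones:

```json
POST /daily-index-2024.01.*/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": {
        "field": "category.keyword",
        "size": 100
      }
    }
  }
}
```

`"size": 0` suppresses the hits themselves so only the aggregation result comes back; even so, the query fans out to every daily index matched by the wildcard.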
Our cluster setup:

- 3 master nodes: 500 vCPU and 1 GB physical memory
- 5 data nodes: 1 vCPU and 1 GB physical memory
- 1 client node: 500 vCPU and 1 GB physical memory
What could be the best solution for this?