How should I configure the ELK stack to save logs daily?

I have a requirement to retain log indices for a period of 30 days, but the problem is that each daily index could grow to more than 9 GB, i.e. more than 10 million records per day.

Previously we deployed this setup on a single-node cluster, and after it had been running for a while, the Kibana dashboard was no longer able to query the specified index and was throwing timeout errors.

Since I am new to this, I don't have enough expertise to know what kind of configuration would resolve this issue.

Should I use a single-node cluster with more capacity (I know it is prone to single-point failure, but it's an option), or opt for a multi-node cluster? How do I decide on the number of nodes? What should the settings for an individual log index be, i.e. should it have 5 shards or more? Is there a standard architecture for this?

Can someone help?

May I suggest you look at the following resource about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing
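As a rough back-of-envelope check: 9 GB/day over 30 days is about 270 GB total, and Elastic's general guidance is to keep individual shards in roughly the 10–50 GB range. That suggests a single primary shard per daily index is enough; 5 shards per daily index would leave you with 150 small shards after a month, which mostly adds overhead. A common pattern is to let an ILM policy delete indices after 30 days and apply the shard settings via an index template. A minimal sketch, assuming daily indices named `logs-*` (the policy and template names here are hypothetical, and you'd tune replicas to your node count):

```json
PUT _ilm/policy/logs-30d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

```json
PUT _index_template/daily-logs
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.lifecycle.name": "logs-30d"
    }
  }
}
```

Note that replicas only help if you have more than one node; on a single-node cluster, `number_of_replicas: 1` just leaves the cluster yellow. The sizing talk linked above walks through how to benchmark your own data to decide node count and shard size empirically.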
