Hello,
So, I have 5 RHEL nodes available, and I have to set up ELK as a cluster on them.
I tried this before with 2 master nodes and 3 data nodes, with Logstash and Kibana running on each master. The cluster failed once a day, always on one specific master node, and because I had set minimum_master_nodes to 2, logs stopped being written whenever that master went down.
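If I understand the zen discovery quorum rule correctly, this setting was the trap: with only 2 master-eligible nodes, a quorum of 2 means losing either one stalls the cluster. A sketch of the arithmetic (my reading of the docs, not a tested config), assuming Elasticsearch 6.x or earlier:

```yaml
# elasticsearch.yml (pre-7.x zen discovery)
# Quorum rule: minimum_master_nodes = (master_eligible_nodes / 2) + 1
#
# With 2 master-eligible nodes: (2 / 2) + 1 = 2
#   -> losing either master drops below quorum; the cluster stops
#      electing a master and writes fail (what I saw).
# With 3 master-eligible nodes: (3 / 2) + 1 = 2
#   -> one master can fail and the remaining two still form a quorum.
discovery.zen.minimum_master_nodes: 2
```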
Later, I changed the architecture to 1 master, 1 client (coordinating) node, and 3 data nodes, but the same thing happened and the cluster went down again.
Some background on the amount of data I was receiving: the first time I started the cluster, it gathered nearly 150-200 GB of indices (number of replicas was 1). As days passed, the volume fluctuated, generating indices of around 3-6 GB per day. But whenever the cluster went down and I restarted it, the indices would again grow to nearly 150 GB, and the cluster would come down again as searches got slow.
What do you guys think about such a situation?
What should my cluster architecture be for this situation, given around 100-120 clients hitting the Elastic cluster?
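For reference, here is one layout I was considering for the 5 nodes: 3 master-eligible nodes that also hold data, plus 2 data-only nodes. The hostnames and role split are just an illustration, not something I have run; settings assume pre-7.x Elasticsearch:

```yaml
# --- node-1, node-2, node-3 (master-eligible + data) ---
node.master: true
node.data: true
# quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2

# --- node-4, node-5 (data only) ---
node.master: false
node.data: true
```

This way a single node failure, including a master, should leave the cluster writable, though I am not sure whether mixing master and data roles is wise at this data volume.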
My bad that I cannot provide the logs, as the cluster has been wiped.
Thank you