Can such a cluster work fine?

I am aware that there is no definitive answer to the question of how many shards, nodes, etc. a cluster should have, but I would like to ask whether the numbers below look normal or whether this is an extremely badly organized cluster. So...
I have 5 nodes in my cluster: 3 master nodes (4 GB RAM each) and 2 data nodes (16 GB RAM each). The Java heap size is set to half of each host's RAM.
I monitor several VMs using Metricbeat and Heartbeat, which create daily indices of roughly 300 MB each.
I have Logstash with an index lifecycle policy that keeps 50 GB of data, roughly like the sketch below.
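A minimal sketch of how such a retention setup can be expressed as an ILM policy; the policy name and the exact thresholds here are placeholders, not my actual configuration:

```
PUT _ilm/policy/logstash-retention
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          // roll over the write index once it reaches 50 GB (placeholder threshold)
          "rollover": { "max_size": "50gb" }
        }
      },
      "delete": {
        // delete rolled-over indices some time after rollover (placeholder age)
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```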
I also have daily indices for some other metrics (one index that grows to about 1.5 GB per day and one that collects only about 50 MB per day).
In total I have 104 indices, 320 shards, and 100,223,812 documents (these figures are from after I removed a large amount of old data).
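For completeness, per-index counts like these can be read from the cat indices API; the column selection here is just one way to slice it:

```
GET _cat/indices?v&h=index,pri,rep,docs.count,store.size&s=store.size:desc
```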

Can such a cluster work fine? After some time without deleting old indices (two weeks or so), the Elastic Stack starts to get laggy, and Kibana tends to report that the Elasticsearch plugin timed out (30 s). After removing data it works fine again.
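When it gets laggy, heap pressure on the data nodes can be checked with the cat nodes API, for example:

```
GET _cat/nodes?v&h=name,node.role,heap.percent,ram.percent,cpu
```

If heap.percent sits persistently high on the data nodes, that would suggest they are holding too many shards or too much data for an 8 GB heap.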

What can help in this situation? Should I add more data nodes? Should I create weekly indices instead of daily ones?

And an extra question: some of the indices, which were created simply by sending data, use the default settings and therefore have 5 primary shards. Does that make sense in such a cluster?
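If it doesn't, my understanding is that the default can be overridden with a (legacy) index template, roughly like this; the template name and index patterns are just examples:

```
PUT _template/small-daily-indices
{
  "index_patterns": ["metricbeat-*", "heartbeat-*"],
  "settings": {
    "number_of_shards": 1,   // one primary is usually enough for ~300 MB/day
    "number_of_replicas": 1
  }
}
```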
