Number of nodes

Hi

In the article linked below, I saw that with daily index rollover and around 5 shards per index, my cluster will be hurting after around 6 months unless I have around 15 ES nodes.

Here is the quote:

"If you roll with the defaults for Logstash (daily indices) and ES (5 shards), you could generate up to 890 shards in 6 months. Further, your cluster will be hurting—unless you have 15 nodes or more."

Is this true?

I have a lot of logs going into Elasticsearch and heavy Logstash processing, and Elasticsearch is lagging in Discover, graphs, etc. How can I check whether the number of nodes is the reason (I have 6 nodes)?

https://qbox.io/blog/optimizing-elasticsearch-how-many-shards-per-index

Many thanks,
Tomer

I am not sure which article you are referring to, as I do not see any link.

How many indices are you creating per day? How large are the shards of these indices? Are all your 6 nodes data nodes? What is their specification?
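If it helps, here is a minimal sketch of how you could pull that information from the _cat APIs. This assumes Python with the `requests` library and a cluster reachable at http://localhost:9200 without authentication, so adjust for your setup:

```python
# Sketch: gather per-index shard counts/sizes and node roles from the _cat APIs.
# Assumes a cluster reachable at http://localhost:9200 without authentication.
import requests

ES = "http://localhost:9200"

# One row per index: primary/replica shard counts and total store size in GB.
indices = requests.get(
    f"{ES}/_cat/indices",
    params={"format": "json", "h": "index,pri,rep,store.size", "bytes": "gb"},
).json()
for idx in sorted(indices, key=lambda i: i["index"]):
    print(f"{idx['index']}: {idx['pri']} primaries, "
          f"{idx['rep']} replicas, {idx['store.size']} GB")

# One row per node: role letters (d = data, m = master-eligible) and heap/RAM limits.
nodes = requests.get(
    f"{ES}/_cat/nodes",
    params={"format": "json", "h": "name,node.role,heap.max,ram.max"},
).json()
for node in nodes:
    print(f"{node['name']}: roles={node['node.role']}, "
          f"heap={node['heap.max']}, ram={node['ram.max']}")
```

The node.role letters show which nodes are data nodes (d) and which are master-eligible (m), and the heap/RAM columns are a rough proxy for the node specification I was asking about.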

Hi,

I have now edited the post to add the article link and quoted the important part.

I am using the default Filebeat rollover of one index per day. The number of shards is also the default, 5 shards per index. 2 of the nodes are master nodes and the rest are data nodes. What do you mean by "What is their specification?"? Also, the input can reach around 15K JSON documents per second.
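For context, here is a rough back-of-the-envelope projection from those numbers (assuming the default of 1 replica per shard and roughly 180 days of retained indices, neither of which I have confirmed):

```python
# Back-of-the-envelope projection from the numbers in this thread.
# Assumptions: 1 index/day, 5 primary shards/index, 1 replica (the default),
# ~180 days of retained indices, ~15,000 documents/second, 4 data nodes.
DAYS = 180
PRIMARIES_PER_INDEX = 5
REPLICAS = 1
DOCS_PER_SECOND = 15_000
DATA_NODES = 4

shards_per_day = PRIMARIES_PER_INDEX * (1 + REPLICAS)
total_shards = DAYS * shards_per_day
docs_per_day = DOCS_PER_SECOND * 86_400
docs_per_primary_shard = docs_per_day // PRIMARIES_PER_INDEX

print(f"Shards after {DAYS} days:  {total_shards}")                   # 1800
print(f"Shards per data node:     {total_shards // DATA_NODES}")      # 450
print(f"Documents per day:        {docs_per_day:,}")                  # 1,296,000,000
print(f"Documents per primary shard per day: {docs_per_primary_shard:,}")  # 259,200,000
```

Under those assumptions the 4 data nodes would end up carrying roughly 450 shards each after 6 months, with each primary shard receiving on the order of 260 million documents per day.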

Thanks for the quick reply.

Have a look at this blog post on shards and sharding. The article you linked to seems to primarily discuss search use cases where time-based indices are not used, so it may not be applicable here.
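To answer your question about whether the node count is the reason: it may be worth checking how many shards each data node is carrying and how large they are on average. Here is a minimal sketch, again assuming Python with `requests` against a cluster at http://localhost:9200 without authentication:

```python
# Sketch: how many shards per data node, and how big are they on average?
# Assumes a cluster reachable at http://localhost:9200 without authentication.
import requests

ES = "http://localhost:9200"

# Cluster-level view: status, active shards, and number of data nodes.
health = requests.get(f"{ES}/_cluster/health").json()
active_shards = health["active_shards"]
data_nodes = health["number_of_data_nodes"]
print(f"Status: {health['status']}")
print(f"Active shards: {active_shards} across {data_nodes} data nodes "
      f"(~{active_shards / data_nodes:.0f} shards per data node)")

# Average shard size, from one row per shard in _cat/shards (store in bytes).
shards = requests.get(
    f"{ES}/_cat/shards",
    params={"format": "json", "h": "index,shard,prirep,store", "bytes": "b"},
).json()
sizes = [int(s["store"]) for s in shards if s["store"] is not None]
avg_gb = sum(sizes) / len(sizes) / 1024**3
print(f"Average shard size: {avg_gb:.1f} GB over {len(sizes)} assigned shards")
```

If each data node ends up holding many hundreds of small shards, that would suggest the shard count rather than the number of nodes is the problem, which is exactly what the sizing advice in the blog post addresses.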

