Getting a sharding error along with some exceptions

Hi guys,

I am new to the ELK stack. I am getting an error in Elasticsearch, and I am not sure how to fix this issue. Can someone help me?


@stephenb can you please help me sort it out, or can you please tag somebody who can help me?


Please be patient. This is a public forum with many questions and topics, and your question is not more important than anyone else's; this is not paid support. It may take several hours, or perhaps a day or more, IF someone even chooses to answer (generally someone will).

Also, please do not directly @ mention people who have not joined your topic. That is not best practice or good forum etiquette.

And finally, please do not post pictures; they are hard to read and cannot be searched or debugged. Please post formatted text instead.

Also, did you search this forum for "maximum shards open"? There are many existing topics and answers for that question.

I am really sorry Stephen, I beg your pardon.


It's OK, now you know... but this is a public forum... we want you and your questions... we are happy to try to help... and search is your friend :slight_smile:

Yeah, thank you, but I am really sorry...

Here you go... :slight_smile:
All good! Basically, you have waaay too many shards :slight_smile:

(Dynamic) Limits the total number of primary and replica shards for the cluster. Elasticsearch calculates the limit as follows:

cluster.max_shards_per_node * number of non-frozen data nodes

Shards for closed indices do not count toward this limit. Defaults to 1000. A cluster with no data nodes is unlimited.

Elasticsearch rejects any request that creates more shards than this limit allows. For example, a cluster with a cluster.max_shards_per_node setting of 100 and three data nodes has a shard limit of 300. If the cluster already contains 296 shards, Elasticsearch rejects any request that adds five or more shards to the cluster.

Notice that frozen shards have their own independent limit.
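To make the arithmetic from the docs concrete, here is a minimal sketch of the limit check (illustrative only, not Elasticsearch's actual implementation; the function names are hypothetical):

```python
def shard_limit(max_shards_per_node: int, data_nodes: int) -> float:
    # Limit = cluster.max_shards_per_node * number of non-frozen data nodes.
    # A cluster with no data nodes is unlimited; model that as infinity.
    return max_shards_per_node * data_nodes if data_nodes else float("inf")

def request_allowed(current_shards: int, new_shards: int,
                    max_shards_per_node: int, data_nodes: int) -> bool:
    # Elasticsearch rejects any request that would push the total past the limit.
    return current_shards + new_shards <= shard_limit(max_shards_per_node, data_nodes)

# The example from the docs: 100 per node * 3 data nodes = 300 total.
# With 296 shards already open, adding 4 is fine, adding 5 is rejected.
print(request_allowed(296, 4, 100, 3))  # True
print(request_allowed(296, 5, 100, 3))  # False
```

The real fix is usually to reduce the shard count (e.g. by deleting or shrinking indices) rather than just raising `cluster.max_shards_per_node`.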

Oh, thank you so much Stephen, thanks a lot!