Logstash stopped processing the incoming logs

Hello,

I'm using the ELK stack to correlate logs from network devices. The stack worked perfectly for a long time, but today Logstash stopped processing the incoming logs.
When I investigated, I found this error in the log files:

"error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}}}}

You can also see the Elasticsearch configuration in the screenshot below.

[screenshot: Elasticsearch configuration]

Please, how can I resolve this problem?
Thanks in advance for your help.

Elasticsearch has a default, self-imposed limit of 1000 open shards per data node in the cluster.
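
To confirm where you stand, you can compare the current number of open shards against the limit. A minimal check from Kibana Dev Tools (or the equivalent curl calls); the `filter_path` values just trim the output:

```
# How many shards are currently open (primaries + replicas)
GET _cluster/health?filter_path=status,active_shards,active_primary_shards

# What the per-node limit is set to (default 1000)
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node
```

Since the limit scales with the number of data nodes, the [1000] total in your error suggests a single data node.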

Your cluster is "over-sharded". Shards are a fundamental unit of scalability and sizing in Elasticsearch, and each open shard (and each replica) consumes heap.
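
To see where all those shards are coming from, the `_cat` APIs give a quick overview; sorting shards by on-disk size usually makes the problem obvious:

```
# Every shard with its size, smallest first
GET _cat/shards?v&h=index,shard,prirep,store&s=store:asc

# Heap usage per node, for context
GET _cat/nodes?v&h=name,heap.percent,node.role
```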

What you're very likely to find (I'm dealing with the same issue myself) is that you're using daily indices. This should perhaps be considered an outmoded design, because it tends to produce poorly sized shards: some may be very small while others are overly large. There are sizing guidelines for this, if you know to look for them.
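
If it is daily indices, listing them with their primary store size will show how small many of them are; anything far below the commonly cited tens-of-gigabytes-per-shard guideline is a candidate for consolidation:

```
# Indices with shard counts and primary data size
GET _cat/indices?v&h=index,pri,rep,docs.count,pri.store.size&s=index
```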

Short-term actions: consider closing (or deleting) old indices, or reducing replicas. You can also add another node.
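
As a sketch of those short-term options (the index names below are placeholders, adapt them to your own naming): deleting an old index frees its shards immediately, dropping replicas halves the shard count for an index, and raising the limit slightly is possible but only buys time:

```
# Delete an old daily index you no longer need (placeholder name)
DELETE /network-logs-2021.01.01

# Drop replicas on older indices (placeholder pattern)
PUT /network-logs-2021.*/_settings
{
  "index": { "number_of_replicas": 0 }
}

# Last resort: raise the per-node limit temporarily
PUT _cluster/settings
{
  "persistent": { "cluster.max_shards_per_node": 1200 }
}
```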

Longer term, you really need to move towards using ILM instead of daily indices, so that indices are rolled over based on shard size. Even better is to combine ILM with data streams; I understand that's the recommended approach anyway (on the Elasticsearch side at least), though the Logstash setup documentation could be clearer in this regard.
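
Here's a rough sketch of what that looks like, assuming a reasonably recent 7.x/8.x cluster: an ILM policy that rolls over on primary shard size or age, and a composable index template that turns matching writes into a data stream. All names, sizes, and retention values are assumptions to adapt:

```
# ILM policy: roll over at ~50 GB per primary shard or 30 days, delete after 90 days
# (on versions before 7.13, use "max_size" instead of "max_primary_shard_size")
PUT _ilm/policy/network-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}

# Composable template that creates a data stream for matching names
PUT _index_template/network-logs-template
{
  "index_patterns": ["network-logs*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "network-logs-policy",
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}
```

On the Logstash side, recent versions of the elasticsearch output plugin support a data_stream option, so events are written to the stream rather than to dated indices.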
