We recently started getting messages from Logstash stating: "Could not index event to Elasticsearch... Validation failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open".
Well, these errors appear when we launch an ingest process that should create a new index, while other processes that feed already-created indexes seem to work fine.
So the problem seems to be that we have too many shards allocated on our single-node cluster. However, when I query GET _nodes/stats, shards_stats.total_count in the response shows only 526.
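For reference, these are roughly the requests I've been using to check the counts (the filter_path values are just my guess at how to trim the output to the relevant fields; the exact path of the shard-count field may differ by version):

```
# Per-node shard stats (this is where I see total_count: 526)
GET _nodes/stats?filter_path=nodes.*.indices.shards_stats

# Cluster-wide view, to compare against the [1000]/[1000] in the error
GET _cluster/health?filter_path=active_shards
GET _cat/shards?v
```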
The whole topic of shards, space allocation, lifecycle policies, etc. has me a bit confused, and although I've been reading a lot about it these days, I'm still puzzled by the mismatch between this error and the 526 total count metric.
Clearly there is something here that I'm not understanding. How can my node run out of available shards if only 526 seem to be allocated?