Elasticsearch halting when multiple indices are created

I have a cluster of 3 master nodes and 10 data nodes, with 31k shards. We use this as a centralized logging platform supporting multiple products (53, to be exact), with Logstash creating an index per day per product (i.e. foo-YYYY.MM.DD, bar-YYYY.MM.DD, etc.). Every day at UTC midnight, Logstash starts indexing into the new indices, which of course creates the new index, allocates the shards, and so on. The problem is that at UTC midnight all 53 new indices are created at once, which is turning into an expensive process. We are seeing that indexing straight up halts during that time, sometimes for 10-15 minutes, which breaks some automation, generates false alerts, etc.

Is there a way to stage index creation, so that all 53 indices are not created at the exact same time? We would like to keep having an index per day, if possible, as it works well for searching and archiving purposes.
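Something like this is what I have in mind — a rough sketch, assuming Python with the `requests` library, the cluster on `localhost:9200`, and placeholder product names:

```python
# Hypothetical pre-creation script: run from cron an hour or so before UTC
# midnight, so the expensive index-creation work is spread out instead of
# happening all at once when Logstash sends its first document of the day.
import time
from datetime import datetime, timedelta, timezone

import requests

ES_URL = "http://localhost:9200"   # assumption: cluster endpoint
PRODUCTS = ["foo", "bar"]          # assumption: stand-in for our 53 products

def precreate_tomorrows_indices(stagger_seconds=30):
    tomorrow = datetime.now(timezone.utc) + timedelta(days=1)
    suffix = tomorrow.strftime("%Y.%m.%d")
    for product in PRODUCTS:
        index = f"{product}-{suffix}"
        # PUT /<index> creates the index; a 400 "already exists" is harmless.
        resp = requests.put(f"{ES_URL}/{index}")
        print(index, resp.status_code)
        time.sleep(stagger_seconds)  # stagger so the master isn't slammed

if __name__ == "__main__":
    precreate_tomorrows_indices()
```

With something like that in place, Logstash's first write of the day would hit an index that already exists, and the 53 creations would be spread over ~25 minutes instead of landing in the same instant.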

That's massive, waaaaaay too many shards, and likely to be causing the issue.

Which version of Elasticsearch are you on? Elasticsearch 2.x introduced delta cluster state updates, which should help if you are not already on it. Before that, the entire cluster state (which could be quite large given the large number of shards in the cluster) needed to be propagated to all nodes for every update.
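To get a feel for how big that payload is, you can pull the full cluster state yourself — a quick sketch, assuming a node on `localhost:9200` and Python with `requests`:

```python
# Rough check of how large the full cluster state payload is; with 31k shards
# the routing table alone can run to many megabytes of JSON.
import requests

resp = requests.get("http://localhost:9200/_cluster/state")  # assumption: local node
print(f"cluster state is ~{len(resp.content) / 1024 / 1024:.1f} MB of JSON")
```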

I mean, in every other capacity our node/shard ratio isn't an issue, and as I said, this wouldn't be a problem if index creation didn't all happen at once.

And this is version 2.3.3, for reference.

Right, but if you have 310,000 shards and then you go to add more, ES needs to calculate where those need to go and then allocate them. That's really expensive.
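You can actually watch those queued creation tasks pile up around UTC midnight — a quick check, again assuming a node on `localhost:9200`:

```python
# Each index creation is a cluster state update that goes through the master's
# task queue; indexing stalls while these back up. This just prints the queue.
import requests

resp = requests.get("http://localhost:9200/_cluster/pending_tasks")
for task in resp.json()["tasks"]:
    print(task["time_in_queue"], task["source"])
```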

How large are these shards on average?

You have an extra zero on that :wink: It's 31,000 shards, not 310,000.

A rough estimate of the shard size is maybe 5 MB?

Fair call, it's still way too many, and you're wasting massive amounts of resources to maintain that.

A 5MB shard and a 50GB shard take the same amount of resources to have open in ES.
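One way to cut that down going forward — a sketch, assuming your indices still use the 2.x default of 5 primary shards and that a `*-*` pattern matches your daily index names — is an index template that gives new daily indices a single primary shard:

```python
# Sketch: apply an index template so newly created daily indices get 1 primary
# shard instead of the default 5. The pattern, template name, and replica
# count here are assumptions; adjust for your naming scheme and redundancy.
import requests

template = {
    "template": "*-*",          # assumption: matches foo-YYYY.MM.DD etc.
    "settings": {
        "number_of_shards": 1,  # ~5 MB of data does not need 5 primaries
        "number_of_replicas": 1,
    },
}

resp = requests.put(
    "http://localhost:9200/_template/daily_logs",  # hypothetical template name
    json=template,
)
print(resp.status_code, resp.text)
```

Note that templates only apply to indices created after the template exists, so the existing 31k shards would have to be reindexed or aged out before you see the benefit.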