Way tooooo many shards


(David Tsang) #1

Hi,

I have been re-architecting my elastic cluster from a wee fledgling to this:
2 coordinating nodes
3 master nodes
12 data nodes

In the process, I seem to have become the proud owner of 170 shards per index! And at > 4000 indices, that's a lotta shards, baby.

Obviously this is not right.

I have > 3500 shards that won't assign.

The stack is running normally. It's not as quick as it used to be, but it's not slow either. Still, I am concerned.

What have I missed?

Any help would be greatly appreciated.

Thanks in advance
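For reference, the unassigned shards can be listed with the cat shards API. This is a minimal sketch assuming the cluster is reachable at localhost:9200; the column names are standard cat API fields:

```shell
# List every UNASSIGNED shard along with the reason it is unassigned,
# assuming the cluster answers on localhost:9200.
curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED

# Quick totals: active vs. unassigned shards, cluster status, etc.
curl -s "localhost:9200/_cluster/health?pretty"
```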


(Shane Connelly) #2

Yikes! That is a lot of shards!

What have I missed?

One plausible reason is that you have one or more index templates configured with a poor shard count. I'd look there (and at any Logstash/Beats output configuration) and update them to a saner number (certainly nothing larger than your 12 data nodes).

As for getting the shard count back down to a reasonable number, you'll first want to check whether the unassigned shards are primaries or replicas. If they're primaries, there is a shrink API you could use to reduce the number of primary shards, or you could simply delete high-shard indices that are no longer valuable. You may instead have an extremely high (misconfigured) number of replicas, in which case you can just change the number of replicas for each index to something more reasonable.
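Both fixes are single API calls; a sketch, with `my-index`, `data-node-1`, and the target shard count as placeholders for your own values:

```shell
# Drop the replica count on every index to 1 (pick what fits your cluster):
curl -s -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 1}}'

# Shrinking primaries takes two steps. First, block writes and relocate a
# copy of every shard of the source index onto a single node:
curl -s -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.routing.allocation.require._name": "data-node-1",
    "index.blocks.write": true
  }
}'

# Then shrink into a new index. The target primary count must be a factor
# of the source's primary count (e.g. 170 -> 10, 5, 2, or 1):
curl -s -X POST "localhost:9200/my-index/_shrink/my-index-shrunk" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"index.number_of_shards": 1, "index.number_of_replicas": 1}}'
```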

There's also a cluster allocation explain API which can give you insights into why a shard may not be assigned.
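Invoking it looks like this; the index name and shard number in the body are placeholders, and with no body at all the API picks the first unassigned shard it finds and explains that one:

```shell
# Ask the cluster why a specific shard is (or isn't) assigned:
curl -s -X GET "localhost:9200/_cluster/allocation/explain?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}'
```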


(David Tsang) #3

Thanks Shane, I'll certainly look into your recommendations.


(Christian Dahlqvist) #4

If you have not already seen it, have a look at this blog post on shards and sharding guidelines as well.


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.