Elasticsearch 2.4 Shard Rebalancing

Hi all,

Service: Elasticsearch
Number of Nodes: 9
Version: 2.4

We currently run a 9-node Elasticsearch cluster on v2.4. Each node holds a relatively even number of shards, and we have not adjusted any of the settings described at https://www.elastic.co/guide/en/elasticsearch/reference/2.4/shards-allocation.html#shards-allocation. That said, we do notice that the primary shards do not seem to be well distributed across the nodes. Since we run with default settings, cluster.routing.rebalance.enable should be set to all, which from what I have read SHOULD result in primary shard rebalancing. Yet there is one index where almost all of its primaries (all but ~4 out of 30) sit on just 4 nodes, and one of our nodes has 0 primary shards on it, even though we have restarted other nodes in the cluster to try to push some over to it.
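In case it is useful, here is one way to see the distribution I'm describing (the index name below is just a placeholder for the affected index):

```
# list every shard copy of the index with its type (p = primary, r = replica) and the node it lives on
curl -s 'localhost:9200/_cat/shards/my_index?v&h=index,shard,prirep,state,node'

# per-node shard counts and disk usage, to confirm the overall counts really are even
curl -s 'localhost:9200/_cat/allocation?v'
```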

My question is: how can I determine when the cluster last rebalanced? Am I correct to assume that the primary shards, if left at the default settings, SHOULD rebalance themselves across the cluster? If so, when does the rebalance actually occur?

Please let me know if there is any further information I can provide!

No, that's not what should happen. Primary and replica shards do essentially the same amount of work, so it shouldn't really matter from a balancing point of view whether any given shard copy is a primary or a replica.

I understand that writes eventually hit the replica, but if we are doing heavy writes, wouldn't the primary be doing more than the replica initially?

Also, please excuse my newbieness here, but the documentation for cluster.routing.rebalance.enable specifically mentions rebalancing primaries. Am I misunderstanding the purpose of cluster.routing.rebalance.enable, then?

No, the small amount of coordination that the primary has to do is pretty cheap compared with the job of actually indexing the documents (tokenisation, analysis, etc) and getting the data safely onto disk, which has to happen on each shard copy whether it be primary or replica.
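If you want to verify that on your own cluster, per-shard indexing stats are one way to see it; something like the following (the index name is a placeholder) should show the primary and replica copies of each shard reporting comparable indexing totals:

```
# indexing stats broken down per shard copy; each entry's "routing.primary" flag says
# whether that copy is the primary, and "indexing.index_total" shows the work it has done
curl -s 'localhost:9200/my_index/_stats/indexing?level=shards&pretty'
```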

It's a good point. This setting determines which shards may be moved in order to rebalance the cluster, but this notion of "balance" doesn't distinguish primaries from replicas. I'm not really sure why you'd want to only permit the movement of primaries for rebalancing purposes. This feature dates back many years. I will try and find out.
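For completeness, it is a dynamic setting, so you can check or change it through the cluster settings API; a minimal sketch (the default is all, so there is normally nothing to change):

```
# any non-default cluster-level settings (an empty result means everything is at its default)
curl -s 'localhost:9200/_cluster/settings?pretty'

# how it would be changed; accepted values are all, primaries, replicas and none
curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.rebalance.enable": "all" }
}'
```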

That makes a bit more sense. Thank you for quickly clarifying that @DavidTurner!

So in this case, in a cluster of 9, we have two nodes that seem to be bearing the brunt of the load. Whether that comes from one heavily written index or several (likely a combination of 2 to 3), how would you suggest we tackle 'balancing' the cluster? Should we increase cluster.routing.allocation.balance.(shard|index|threshold)? Should we just restart the node to force the shards to reallocate (though normally the same shards just come back to the restarted node eventually)?
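For clarity, this is the kind of change I have in mind, if it comes to that; the values and the index/node names below are placeholders, not something we have tried:

```
# nudging the balancer to weight per-index balance more heavily
# (defaults are roughly balance.shard=0.45, balance.index=0.55, balance.threshold=1.0)
curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.balance.index": "0.75"
  }
}'

# or moving a single shard by hand instead of restarting a whole node
curl -s -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    { "move": { "index": "my_index", "shard": 0, "from_node": "node-a", "to_node": "node-b" } }
  ]
}'
```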

Thanks again for all the assistance and clarification!

How are you observing that these nodes are overloaded?

The first thing to investigate is whether this is a balance problem at all. Do the overloaded nodes have a disproportionate number of shards of the indices you're indexing into?
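A quick way to count that (here "my_write_index" stands in for whichever index you are actively indexing into):

```
# number of shard copies of the hot index on each node
curl -s 'localhost:9200/_cat/shards/my_write_index?h=node' | sort | uniq -c | sort -rn
```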

CPU load is the number one indicator. On top of that, our metrics show that the overloaded node has, on average, about twice as many active search threads as the rest of the nodes.
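For context, this is roughly how we are pulling those numbers (exact column names may differ slightly on 2.4):

```
# active/queued/rejected counts for the search and index thread pools on each node
curl -s 'localhost:9200/_cat/thread_pool?v&h=host,search.active,search.queue,search.rejected,index.active,index.rejected'

# and a spot check of what the busy node is actually doing
curl -s 'localhost:9200/_nodes/hot_threads'
```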

Is there any other metric you would suggest looking at, @DavidTurner? Any insight would be helpful :slight_smile:

edit: I should mention that we have 9 nodes and 3 master nodes. The services using our Elasticsearch cluster hit a load-balanced HTTPS endpoint (we don't use a direct TCP client at the moment). So it could be completely random that we keep hitting the same node over and over again... but that seems a little weird.
