Cluster rebalancing across AZs

We have a cluster with 6 warm data nodes per AZ in EC2, for a total of 18 nodes spread in 3 AZs.
As they were getting full, we added 1 more node to each AZ, and enabled auto-balancing in the cluster.
I expected some of the shards in the 6 nodes in AZ a would move to the new node in AZ a, but instead, I see that some shards from nodes in AZ b are moving to the new node in AZ a.

As moving data across AZs in EC2 incurs extra costs, I would like to avoid this. Is there an attribute I can set on the nodes to prevent them from rebalancing shards to nodes in other AZs?


Some more info:

  • We are currently using ES 6.8
  • We have this configuration already set in the data nodes:
    cluster.routing.allocation.awareness.attributes: aws_availability_zone
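For reference, in `elasticsearch.yml` that awareness setting (plus optional forced awareness, which is a sketch here, not something the post says is enabled) would look like this; the zone values are placeholders:

```yaml
# Standard shard allocation awareness (6.x setting names)
cluster.routing.allocation.awareness.attributes: aws_availability_zone

# Optional forced awareness: with three zones listed, ES will not pile
# extra copies of a shard into fewer zones if one zone goes away.
cluster.routing.allocation.awareness.force.aws_availability_zone.values: zone-a,zone-b,zone-c
```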

I will cross-post on Stack Overflow.

As far as I know this blog post is still correct, and it explains what is happening in your scenario. Towards the end it describes the process of moving a shard and the fact that this always means copying from the primary shard, which, due to the awareness settings in your cluster, will always be in a different AZ.


You're right. I just did a test moving a replica, and I see the data is coming from the primary's node, not the replica's node.
I had not noticed ES did that with replica shards. Thanks!
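A manual move like the one in that test can be issued through the reroute API; a sketch in Dev Tools console syntax (index and node names are placeholders):

```
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-index",
        "shard": 0,
        "from_node": "node-in-az-b",
        "to_node": "new-node-in-az-a"
      }
    }
  ]
}
```

Watching `GET _cat/recovery?v&active_only=true` while the move runs shows which node the data is actually streaming from.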

Note though, that with auto-balancing on, ES was even moving primary shards across AZs, and that could be optimized: if a primary is moved within the same AZ, no data has to cross AZs at all.

I think the original question is still valid: is there a way to make ES rebalance shards only within a single AZ?

So far our solution has been to create a custom rebalancing script that rebalances only primary shards within a single AZ.
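For what it's worth, the core of such a script can be a small helper that only emits `_cluster/reroute` move commands whose source node is in the same AZ as the target; a minimal sketch (the function name and the shard/node data shapes are my own simplification, not the actual script):

```python
def build_same_az_moves(shards, node_az, target_node):
    """Build a _cluster/reroute request body containing only 'move'
    commands that stay inside the target node's AZ.

    shards:      list of dicts like {"index": ..., "shard": ..., "node": ...}
                 (e.g. parsed from GET _cat/shards?format=json)
    node_az:     mapping of node name -> AZ attribute value
    target_node: name of the (new) node to move shards onto
    """
    target_az = node_az[target_node]
    commands = []
    for shard in shards:
        # Only move shards whose current node shares the target's AZ,
        # and skip shards already on the target node.
        if node_az[shard["node"]] == target_az and shard["node"] != target_node:
            commands.append({
                "move": {
                    "index": shard["index"],
                    "shard": shard["shard"],
                    "from_node": shard["node"],
                    "to_node": target_node,
                }
            })
    return {"commands": commands}
```

The resulting body would then be POSTed to `_cluster/reroute`; with `cluster.routing.rebalance.enable` set to `none`, ES won't undo the manual placement by rebalancing on its own.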

If a primary is to be moved I believe one of the replicas is promoted before the move is initiated and the shard is then copied from the newly promoted copy.

I just did a test: moving a primary shard didn't promote the replica to primary.
Also, the data traffic went from the old primary's node to the new primary's node; the node hosting the replica didn't seem affected.

At the end of the move, the primary was still the newly created shard.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.