Primary shards not balanced after recovery

I'm using version 1.4.1 with 2 nodes and a total of 90 indexes, 2 shards each. I load with 0 replicas and then update to 1 replica after loading completes, as the documentation suggests.

It appears that after a node restart/recovery, most of the primary shards end up on one node while the other node holds most of the replicas, and they are never rebalanced. As a result, when I create a new index with 0 replicas, all of its shards (which are necessarily primaries) are placed on the same node, the one already holding more replicas, so the cluster doesn't use the other node at all during the loading process.
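For context, this is roughly how I'm checking the skew, using the cat shards API (node names and output layout are from my cluster, so treat the `awk` column positions as an assumption; it also assumes single-word node names):

```shell
# List every shard with its type (p = primary, r = replica) and the node it lives on.
curl -s 'localhost:9200/_cat/shards?v'

# Count primaries per node: column 3 is prirep, the last column is the node name.
curl -s 'localhost:9200/_cat/shards' | \
  awk '$3 == "p" { count[$NF]++ } END { for (n in count) print n, count[n] }'
```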

I tried fixing it by setting the settings below, but that didn't work:
cluster.routing.allocation.balance.index: 0.9f
cluster.routing.allocation.balance.shard: 0.0f
*The intent was to make the allocator care only about shard distribution within each index, not the overall shard counts.
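Both of those settings are dynamic, so I applied them roughly like this through the cluster settings API (a transient update, so it resets on a full cluster restart; the endpoint and host are the defaults):

```shell
# Shift the balancer weights toward per-index balance and away from total shard count.
curl -s -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.balance.index": 0.9,
    "cluster.routing.allocation.balance.shard": 0.0
  }
}'
```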

Since cluster.routing.allocation.balance.primary has been deprecated since 1.3.8, I can't think of any way to balance the primary shards.
My only solution at this point is to index with 1 replica from the beginning...
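That fallback would look something like this at creation time, instead of bumping replicas after the load (the index name is just a placeholder):

```shell
# Create the index with 1 replica up front, so primaries and replicas
# are spread across both nodes from the start of indexing.
curl -s -XPUT 'localhost:9200/my_index' -d '{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}'
```

The downside is that every document is indexed twice during the load, which is exactly what loading with 0 replicas was meant to avoid.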

Any thoughts?
