Shards not rebalancing after adding new data nodes to the cluster

We were running an ES 2.4.0 cluster with 3 master nodes, 3 data nodes, and one client node. At that point shards were evenly distributed across all three data nodes. To increase storage and compute capacity we added two new data nodes to the cluster. We assumed that as soon as the new data nodes joined, ES would rebalance shard allocation across the cluster, but that didn't happen. The old shards are still allocated to the old data nodes, and the new shards being created in the cluster are only assigned to the new data nodes.
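As a rough sketch of how the current distribution can be checked (host and index name are assumptions, adjust to your cluster), the cat APIs show shard counts and placement per node:

# shard count and disk usage per data node
curl -s 'localhost:9200/_cat/allocation?v'

# placement of each shard (primary/replica, node) for one index
curl -s 'localhost:9200/_cat/shards/test?v'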

We tried a full cluster restart, but it didn't help.

We then tried updating the settings below, hoping ES would rebalance the shards across the cluster, but that didn't work either.
/_cluster/settings

{
  "persistent" : {
    "indices" : {
      "breaker" : {
        "fielddata" : {
          "limit" : "60%"
        }
      }
    }
  },
  "transient" : {
    "cluster" : {
      "routing" : {
        "rebalance" : {
          "enable" : "all"
        },
        "allocation" : {
          "allow_rebalance" : "indices_all_active",
          "cluster_concurrent_rebalance" : "5",
          "disk" : {
            "watermark" : {
              "low" : "50%",
              "high" : "300gb"
            }
          },
          "balance" : {
            "shard" : "1.0f"
          }
        }
      },
      "info" : {
        "update" : {
          "interval" : "1m"
        }
      }
    }
  }
}
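For reference, a rebalance-related settings update like the one above can be applied with a request along these lines (a sketch using the flattened key form; localhost:9200 is an assumption):

curl -XPUT 'localhost:9200/_cluster/settings' -d '
{
  "transient" : {
    "cluster.routing.rebalance.enable" : "all",
    "cluster.routing.allocation.cluster_concurrent_rebalance" : "5"
  }
}'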

Any suggestions on how we can rebalance the shard allocation evenly across all the data nodes?

Are all nodes using exactly the same version of Elasticsearch? Have the new nodes successfully joined the cluster if you check the cat nodes API?
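A quick way to check both at once (host is an assumption) is:

curl -s 'localhost:9200/_cat/nodes?v&h=name,version,node.role,master'

The version column will show whether any node is running a different Elasticsearch release.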

Thanks, Christian.

We checked and found that the new nodes are running 2.3.5, whereas the old nodes are on 2.4.0. We will upgrade the new nodes to 2.4.0 and share the outcome after that.

Shards on newer nodes can not be moved to older nodes, so that would explain the behaviour you saw.

Yes. After upgrading the new nodes to 2.4.0, shards were distributed to all data nodes, but we found the distribution is not even. Each index is configured with 5 shards and 1 replica, so we expected each data node to get one of the 5 primary shards of an index, and likewise one replica. That didn't happen. For example, for the index test, data node1 got 2 primary shards, data node2 got 1 primary and 1 replica, data node3 got 1 primary and 1 replica, data node4 got 1 primary and 1 replica, and data node5 got 2 replica shards.

But we want a single primary shard of the index on each data node, and the same for the replica shards. How can we redistribute the shards?

You can not control that as far as I know.
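If the goal is only an even shard count per node (not which copy ends up as primary), one possible mitigation, sketched here rather than verified on your cluster, is the per-index limit index.routing.allocation.total_shards_per_node:

curl -XPUT 'localhost:9200/test/_settings' -d '
{
  "index.routing.allocation.total_shards_per_node" : 2
}'

With 5 primaries and 5 replicas across 5 data nodes, a limit of 2 means each node holds two shards of the index, but a node may still get two primaries or two replicas, and setting the limit too low can leave shards unassigned.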

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.