We were running an ES cluster on version 2.4.0 with 3 master nodes, 3 data nodes and one client node. At that time, shards were evenly distributed across all three data nodes. To increase storage and computing power, we added two new data nodes to the cluster. We assumed that the moment we added the new data nodes, ES would rebalance shard allocation across the cluster, but that didn't happen: the old shards are still allocated to the old data nodes, and the new shards being created in the cluster are only assigned to the new data nodes.
We tried a complete ES cluster restart, but it didn't help.
We then tried updating the settings below, hoping ES would rebalance the shards across the cluster, but that didn't work either.
/_cluster/settings
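For context, a rebalance-related update through that endpoint typically looks like the sketch below. The exact keys we tried aren't shown above; `cluster.routing.rebalance.enable` and the concurrent-rebalance limit are the usual candidates in 2.x, and the values here are just illustrative:

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}
```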
We checked and found that the new nodes are running 2.3.5, whereas the old nodes have 2.4.0. We will be upgrading the new nodes to 2.4.0 and will share the outcome after that.
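For anyone hitting the same issue: a version mismatch like this is easy to spot with the cat nodes API, e.g. something along these lines (the `h` column list is just one way to slice it):

```
GET /_cat/nodes?v&h=name,version,node.role
```

In mixed-version clusters, shards allocated on a newer-version node cannot be relocated to an older-version node, which explains why rebalancing never moved the old shards.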
Yes. After upgrading the new nodes to 2.4.0, shards were being distributed to all data nodes, but we found the distribution is not even. The index is configured with 5 shards and 1 replica, so we expected each data node to be allocated one of the index's 5 primary shards, and likewise one replica. That didn't happen. For example, for the index test: data node1 got 2 primary shards, data node2 got 1 primary and 1 replica, data node3 got 1 primary and 1 replica, data node4 got 1 primary and 1 replica, and data node5 got 2 replica shards.
But we want a single primary shard of the index on each data node, and the same for replica shards. How can we redistribute the shards?
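One option, assuming 5 data nodes and a 5-primary/1-replica index (10 shards total), is to cap the number of the index's shards per node, which forces exactly 2 shards onto each of the 5 nodes:

```
PUT /test/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}
```

Note that this only guarantees two shards (of any kind) per node; ES balances primaries and replicas together and, as far as I know, offers no setting to pin exactly one primary per node. In practice that usually doesn't matter, since primaries and replicas do the same indexing and search work.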