I've been experimenting with rolling upgrades. Removed 2 nodes of a 4 node
cluster with cluster.routing.allocation.exclude._ip. Let the cluster move
all the shards to the 2 nodes. I then disabled allocation
("cluster.routing.allocation.disable_allocation" : true) and re-included
the 2 excluded nodes. Shards remained on two nodes. Re-enabled allocation,
but the shards remain on only two nodes. How can I force the shards back to
all 4 nodes?
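
Roughly, the settings calls I used looked like the following (the IPs are
placeholders and this is a sketch from memory, so the exact bodies may differ):

# exclude two nodes so their shards drain onto the remaining two
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "10.0.0.3,10.0.0.4"
  }
}'

# disable allocation before bringing the excluded nodes back in
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.disable_allocation" : true
  }
}'

# clear the exclusion and re-enable allocation
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "",
    "cluster.routing.allocation.disable_allocation" : false
  }
}'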
Out of curiosity (this is a test cluster), I restarted one of the two nodes
that had shards.
One index had no replicas, so the shards were divided between 2 nodes. No
issues with this index.
Another index had 1 replica, so both nodes had all the shards. After the
restart, the replicas were set to unassigned, with the restarted node
getting no shards.
Creating a new index, with any number of shards or replicas, results in all of
its shards being assigned, even though there are four nodes in the cluster.
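
For what it's worth, when I say assigned/unassigned I'm going by the per-shard
view from the cluster health API, along the lines of:

curl -XGET 'localhost:9200/_cluster/health?level=shards&pretty'
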
How can I verify the cluster allocation settings?
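For example, is something like the following expected to show the complete
picture, or only the settings that have been explicitly set?

curl -XGET 'localhost:9200/_cluster/settings?pretty'
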
Cheers,
Ivan