Hello,
we want to decommission some nodes. I have already excluded those nodes from allocation:
"cluster.routing.allocation.exclude._name"
and also configured each index for the node where its data should land (there will be one remaining node):
"index.routing.allocation.include._name"
That is OK so far, but I don't see any shards relocating.
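For what it's worth, I'm checking with the cat APIs; no shards show up in a RELOCATING state and no recoveries appear:

# adjust host/port as needed
curl -XGET 'http://localhost:9200/_cat/shards?v'
curl -XGET 'http://localhost:9200/_cat/recovery?v'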
If I issue a manual move:

curl -XPOST 'http://localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '
{"commands": [{ "move": { "index": "logstash-2017.06.02", "shard": 1, "from_node": "A", "to_node": "B" }}]}'
I get the following error:
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"
[Fufluns][34.52.56.45:9300][cluster:admin/reroute]"}],"type":
"illegal_argument_exception","reason":"[move_allocation] can't move 1,
from {bacchus}{WaYtCc6yToeYYV_27SBz1g}{6fx-t1-oQ_iHrq_0elxawA}{34.52.56.101}{34.52.56.101:9300},
to {dionysios}{Evs6ZXrhRoin1WryAQdqhw}{QFkKtWzwSBeco-sec-OqDg}{34.52.56.36}{34.52.56.36:9300},
since its not allowed, reason:
[YES(shard has no previous failures)]
[YES(shard is primary and can be allocated)]
[YES(explicitly ignoring any disabling of allocation due to manual allocation commands via the reroute API)]
[YES(target node version [5.5.0] is the same or newer than source node version [5.5.0])]
[YES(no snapshots are currently running)]
[YES(node passes include/exclude/require filters)]
[NO(the shard cannot be allocated to the same node on which a copy of the shard already exists [[logstash-2017.06.02][1], node[Evs6ZXrhRoin1WryAQdqhw], [R], s[STARTED], a[id=eIzWZl4mSVmX5_sQonrWnQ]])]
[YES(enough disk for shard on node, free: [1.6tb], shard size: [11.7kb], free after allocating shard: [1.6tb])]
[THROTTLE(reached the limit of incoming shard recoveries [2], cluster setting [cluster.routing.allocation.node_concurrent_incoming_recoveries=2] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries]))]
[YES(total shard limits are disabled: [index: -1, cluster: -1] <= 0)]
[YES(allocation awareness is not enabled, set cluster setting [cluster.routing.allocation.awareness.attributes] to enable it)]"},"status":400}
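If I read that output right, the NO line says the target node already holds a replica of [logstash-2017.06.02][1], and the THROTTLE line points at the concurrent-recovery limit. I assume I could temporarily raise that limit, something like this (the value 5 is just an example):

# hypothetical value; the default named in the error is 2
curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_incoming_recoveries": 5
  }
}'

But even then, I'm not sure that addresses the NO decider.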
How would I go about forcing the nodes to release their data so that I can shut them down?
Thanks!