Force shard reallocation

Also, if I know I have a node I want to take down, can I force all the primaries to migrate to other machines beforehand to minimize any impact? This becomes more crucial because we sometimes have indices without replication (either because of size constraints or for ingestion performance; the use case can tolerate failure/restarting). Even though we can reindex if we need to, if we know we want to take a machine down, we would like to migrate all the primary/unreplicated indices off that node to avoid needless downtime.

I see that I can reroute specific shards, but then the cluster will rebalance. It seems I can mitigate this, but I'm not sure by how much. Further, I would like to do this globally for all shards on this node, not for individual shards.

Is this possible?

You can use allocation filtering to exclude a node, for example by IP:

PUT test/_settings
{
  "index.routing.allocation.exclude._ip": ""
}
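To verify where the shards actually end up after applying the filter, the cat shards API is handy. A quick check, assuming the cluster is reachable at localhost:9200 and the index is named test:

```shell
# Show each shard of the index, whether it is a primary or replica,
# and the node/IP currently hosting it
curl -s 'localhost:9200/_cat/shards/test?v&h=index,shard,prirep,state,ip,node'
```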


Does this reallocate the index if it's already on the node? I added this setting for one of the indices and it doesn't seem like it's reallocating.

Seems like the cluster-level setting is what I want. From the docs:

curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : ""
    }
}'

I'll try this again - the index level didn't work, but maybe the cluster level will.
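For completeness, once the node is back (or the maintenance is cancelled), the transient exclusion can be removed by setting it to null. A sketch, assuming the same localhost:9200 cluster:

```shell
# Clear the transient IP exclusion so shards may be
# allocated to the node again
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : null
    }
}'
```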

It should move the index, if there is another place to put it. If no other node is available (due to disk space, because every node already holds a copy of the shard, or because of other conflicting filtering rules), the shard will not be moved.
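On newer versions (5.0 and later, if I remember right) there is also a dedicated cluster allocation explain API that reports, per node, why a given shard can or cannot be allocated there. A sketch, assuming an index named test and its primary shard 0:

```shell
# Ask the cluster why shard 0 of index "test" is
# (or is not) allocated where it is
curl -XGET localhost:9200/_cluster/allocation/explain -d '{
    "index": "test",
    "shard": 0,
    "primary": true
}'
```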

Thanks - it worked in the end.

It would be really helpful to have some way to see how it decides what to do with the allocations. Several settings can compete with each other, which can sometimes result in strange allocations.

Agreed. You can issue a reroute with an explain flag: POST /_cluster/reroute?explain=true. It should give you some insight into why shards are not moved.
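Combining explain with a dry run keeps the reroute from actually changing anything while still returning the allocator's reasoning. Something like this, assuming localhost:9200:

```shell
# Explain allocation decisions without applying any moves
curl -XPOST 'localhost:9200/_cluster/reroute?explain=true&dry_run=true'
```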

As usual, Elasticsearch has thought of it. I'll try it out.