Reroute API

I tried to move indices from one node to another using the reroute API, but I got the following error.

type: illegal_argument_exception
reason: "can't move from node-1 to node-2, since its not allowed, reason: .. [NO(the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [10.485435061984592%])][YES(below shard recovery limit of outgoing: [0 < 2] incoming: [0 < 2])]"
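For reference, the move request I sent was along these lines (the index name and shard number here are placeholders, not the real values):

POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-index",
        "shard": 0,
        "from_node": "node-1",
        "to_node": "node-2"
      }
    }
  ]
}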

Why is it not allowed? My node-1 has 90% disk usage while my node-2 is at 27%.

What's the output from GET /_cat/nodes?v?

get node

No worries, you can use the code formatter you used above and it'll be readable :slight_smile:

That doesn't match the names in your output?

Sorry, I think it was my error. I mixed up the "from" and "to" nodes.
My bad.

My question, though: is it not possible to move shards when they already have a replica?
Why is only one node almost full when almost all indices in the hot phase have a replica?
Do you have any suggestions on how to free up disk on my master node besides moving indices?

You cannot allocate a replica on the same node as the primary, no.

We'd need to see _cat/allocation?v to comment.

shards disk.indices disk.used disk.avail disk.total disk.percent  node
   365      158.4gb   163.4gb     16.5gb      180gb           90  instance-0000000001
   326       43.1gb    50.1gb    129.8gb      180gb           27  instance-0000000000
    97      165.7gb   165.7gb    214.2gb      380gb           43  instance-0000000004

Basically, what I wanted to do earlier was to move some indices from instance-01 to instance-00 so the disk usage isn't so imbalanced.

Are you using Elastic Cloud?
Are you using hot/cold tiering or other allocation filtering?

Yes, I am using Elastic Cloud, with 2 instances for hot and 1 for warm.

OK thanks for clarifying! What version are you running?

I would try temporarily removing replicas from a few indices, seeing if allocation balances out, and then adding them back.

I am running 7.17

Could you give me some references on how to do that?

This will do it:

PUT indexname/_settings
{
  "index": { "number_of_replicas": 0 }
}

You can use wildcards too.
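For example, to drop replicas on a group of indices at once (the logs-* pattern is just an illustration, substitute your own naming scheme):

PUT logs-*/_settings
{
  "index": { "number_of_replicas": 0 }
}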

Sorry, I don't quite understand what removing the replica will do.
I checked the replica size and the index size and found that they are the same. So I think removing the replica, moving the primary shard to another node, and then creating another replica won't change the current status, right?
But if the primary and replica are the same size and most of the indices have replicas, why is the disk usage different between the hot nodes?
I deleted some old indices on the warm node yesterday, and voila, the disk usage on a hot node is also decreasing :sweat_smile:

It removes the replica copy of the data, so you will only have your primary shards. Then you can move them.

I mean, even if I temporarily remove them, let them get balanced, and then add them back as you suggested, won't I basically end up with the same amount of disk usage, since a shard and its replica are exactly the same size?

Yes.
However, if you want to balance shard counts between your hot nodes, removing replicas will trigger Elasticsearch to do that. Then you can add replicas back in.
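Adding them back later is the same settings call with the count restored, e.g. (again, logs-* is just an illustration):

PUT logs-*/_settings
{
  "index": { "number_of_replicas": 1 }
}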

Sorry, I think I misunderstood.
At first, I thought it was going to help reduce the disk difference between the 2 nodes.
I don't want to rebalance, because balanced shard counts don't mean balanced disk usage, right?

So then my last option is to reduce the time the data is stored in the hot phase by rolling over the indices early and then moving them to the warm tier?

If you're talking about balancing between hot and warm, then the only way to do that is to reduce the time your hot shards stay on that tier.
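In ILM terms that means tightening the hot rollover and warm transition thresholds. A minimal sketch, where the policy name and the threshold values are only illustrative:

PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "warm": {
        "min_age": "2d",
        "actions": {}
      }
    }
  }
}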

No, I am talking about balancing between 2 hot nodes

Ok, then: if you remove the replicas on the hot nodes, Elasticsearch will first try to balance the shard count so it's the same, and it will factor in the disk use of each node as it does so. You may not end up with 100% matching numbers, but it should be better than what you have now.