Can ES support modifying the shard role for an ILM history index?

My ES cluster is currently running out of disk space, and the new disks are still being purchased. I want to preserve as much data as possible, so I'm coping with the situation by deleting replicas.

The index in question is a historical index managed by ILM; it only receives query operations, no writes. If I simply set the replica count to 0, disk usage across the nodes will become unbalanced. If modifying the shard role were supported, it would be much more convenient: I could evenly distribute the primary shards across the nodes and then set the replica count to 0 to achieve my goal :grinning_face:. Currently, the shard distribution looks like this:

index                                    shard prirep state       docs   store ip             node
iwc_aime_dialog_record-2025.06.06-000017 0     p      STARTED 36475406 116.9gb 10.201.223.142 iwc-index-aie21-es-data-nodes_10.201.223.142
iwc_aime_dialog_record-2025.06.06-000017 1     p      STARTED 36486635   117gb 10.201.223.142 iwc-index-aie21-es-data-nodes_10.201.223.142
iwc_aime_dialog_record-2025.06.06-000017 2     p      STARTED 36451906 116.8gb 10.201.155.68  iwc-index-aie21-es-data-nodes_10.201.155.68
iwc_aime_dialog_record-2025.06.06-000017 3     p      STARTED 36474449   117gb 10.201.155.68  iwc-index-aie21-es-data-nodes_10.201.155.68
iwc_aime_dialog_record-2025.06.06-000017 4     p      STARTED 36483364   117gb 10.201.150.36  iwc-index-aie21-es-data-nodes_10.201.150.36
iwc_aime_dialog_record-2025.06.06-000017 5     p      STARTED 36505509 117.1gb 10.201.150.36  iwc-index-aie21-es-data-nodes_10.201.150.36
iwc_aime_dialog_record-2025.06.06-000017 0     r      STARTED 36475406 116.9gb 10.201.234.192 iwc-index-aie21-es-data-nodes_10.201.234.192
iwc_aime_dialog_record-2025.06.06-000017 1     r      STARTED 36486635   117gb 10.201.234.192 iwc-index-aie21-es-data-nodes_10.201.234.192
iwc_aime_dialog_record-2025.06.06-000017 2     r      STARTED 36451906 116.8gb 10.201.171.69  iwc-index-aie21-es-data-nodes_10.201.171.69
iwc_aime_dialog_record-2025.06.06-000017 3     r      STARTED 36474449 116.9gb 10.201.171.69  iwc-index-aie21-es-data-nodes_10.201.171.69
iwc_aime_dialog_record-2025.06.06-000017 4     r      STARTED 36483364 116.9gb 10.201.139.80  iwc-index-aie21-es-data-nodes_10.201.139.80
iwc_aime_dialog_record-2025.06.06-000017 5     r      STARTED 36505509 117.1gb 10.201.139.80  iwc-index-aie21-es-data-nodes_10.201.139.80
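
Output like the above can be reproduced with the cat shards API (the ?v flag adds the header row):

GET _cat/shards/iwc_aime_dialog_record-2025.06.06-000017?v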

First things first:

You can’t decide which node will hold the primary or the replica shard, but you can move shards manually.

To manually move shards from one node to another, you can use the cluster reroute API:

POST /_cluster/reroute?metric=none
{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    }
  ]
}

Or, if your Elasticsearch cluster is reachable from a Chrome web browser, you can use Elasticvue and perform this operation through its UI.

To summarize, here is my recommendation:

  1. Disable the automatic shard allocation (see the sketch after this list)
  2. Use Elasticvue and re-allocate the primary shards to specific nodes
  3. Remove the replicas (be aware of the trade-offs)
  4. Re-enable the shard allocation
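
A minimal sketch of steps 1, 3, and 4 with the REST API (step 2 is the reroute call above, or Elasticvue). One assumption on my part: I pause cluster.routing.rebalance.enable rather than disabling allocation entirely, so that manual reroute moves can still proceed while the automatic balancer is stopped:

# Step 1: pause automatic rebalancing so manual moves are not undone
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "none"
  }
}

# Step 2: move the primaries with the reroute API (or Elasticvue), as shown above

# Step 3: drop the replicas (trade-off: no redundancy until the new disks arrive)
PUT /iwc_aime_dialog_record-2025.06.06-000017/_settings
{
  "index.number_of_replicas": 0
}

# Step 4: re-enable rebalancing (null resets the setting to its default)
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": null
  }
}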

@EricTowns

Perhaps I'm missing something...
If you're saying you are setting replicas to zero... (dangerous, but it seems like you understand that; hopefully you have snapshots)

Why not set index.routing.allocation.total_shards_per_node: 1?

This will ensure each node holds at most one shard of the index; with replicas at zero, that means at most one primary per node...
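
A sketch of applying that setting to the index from this thread (note the limit counts primaries and replicas together):

PUT /iwc_aime_dialog_record-2025.06.06-000017/_settings
{
  "index.routing.allocation.total_shards_per_node": 1
}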

You did not mention which version you are on; newer versions do a better job of balancing...

Just a drive-by thought.

Hey @stephenb,

Thanks for the answer.

index.routing.allocation.total_shards_per_node is a perfect index-level setting to limit the number of shards per node.

Let me try to clarify the question. The index shards are already balanced: every node holds 2 shards of the iwc_aime_dialog_record-2025.06.06-000017 index, but some nodes hold 2 primaries and some hold 2 replicas. What @EricTowns is asking for is “keeping exactly one primary shard and one replica per node”. Unfortunately, Elasticsearch doesn’t have a setting for this logic.

Hi @Musab_Dogan

What @EricTowns is asking for is “keeping exactly one primary shard and one replica per node”. Unfortunately, Elasticsearch doesn’t have a setting for this logic.

Well, actually, say you have 3 primaries and 1 replica; that equals 6 total shards.

If you have 6 nodes and you set
index.routing.allocation.total_shards_per_node: 1

You will end up with exactly 1 primary or 1 replica shard per node.

This is a bit risky: if you lose a node, you will have unassigned shards... so we always set primaries = (number of nodes / 2) - 1, which in this case gives 2 primaries.

Not to go in circles: the OP already said he is proposing to set replicas to 0...

So this setting

index.routing.allocation.total_shards_per_node: 1

will still distribute the shards with no more than 1 primary per node.
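
A hedged sketch of that combination: with replicas dropped to 0, the index has 6 primaries across 6 nodes, so the per-node limit of 1 forces the allocator to spread exactly one primary onto each node:

PUT /iwc_aime_dialog_record-2025.06.06-000017/_settings
{
  "index.number_of_replicas": 0,
  "index.routing.allocation.total_shards_per_node": 1
}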

And we still don't know which version we are working with, which can greatly impact allocation.

Sure, you can manually route your shards as you explained; I'm just supplying the options.


Thanks for the explanation. Yes, that’s correct for an index with 3 primary shards and 1 replica in a 6-node cluster.

The current shard distribution:

Node Name                                      Primary Shards  Replica Shards
iwc-index-aie21-es-data-nodes_10.201.223.142   2               0
iwc-index-aie21-es-data-nodes_10.201.155.68    2               0
iwc-index-aie21-es-data-nodes_10.201.150.36    2               0
iwc-index-aie21-es-data-nodes_10.201.234.192   0               2
iwc-index-aie21-es-data-nodes_10.201.171.69    0               2
iwc-index-aie21-es-data-nodes_10.201.139.80    0               2

The expected shard distribution:

Node Name                                      Primary Shards  Replica Shards
iwc-index-aie21-es-data-nodes_10.201.223.142   1               1
iwc-index-aie21-es-data-nodes_10.201.155.68    1               1
iwc-index-aie21-es-data-nodes_10.201.150.36    1               1
iwc-index-aie21-es-data-nodes_10.201.234.192   1               1
iwc-index-aie21-es-data-nodes_10.201.171.69    1               1
iwc-index-aie21-es-data-nodes_10.201.139.80    1               1

The index.routing.allocation.total_shards_per_node setting can’t help when you have 6 primaries and 1 replica: that is 12 shards across 6 nodes, so the limit must be at least 2 per node, and the setting can’t distinguish primaries from replicas, so it can’t force the one-primary-plus-one-replica layout above.

I believe we are on the same page, @stephenb.

Yes, this is exactly the effect I want to achieve. I think that, at present, only manually rerouting the shards can achieve it. Thank you both for your suggestions. @Musab_Dogan @stephenb
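
For completeness, a hedged sketch of one such move using the node names from the output above; once the replicas are dropped, each remaining primary can be moved to a now-empty node (shard 1 is just an example):

POST /_cluster/reroute?metric=none
{
  "commands": [
    {
      "move": {
        "index": "iwc_aime_dialog_record-2025.06.06-000017",
        "shard": 1,
        "from_node": "iwc-index-aie21-es-data-nodes_10.201.223.142",
        "to_node": "iwc-index-aie21-es-data-nodes_10.201.234.192"
      }
    }
  ]
}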


Cool!!
BTW @EricTowns, you never mentioned what version you are on...
Please always include that in your questions; Elastic evolves quickly, and releases can behave differently.

Thank you for the reminder. I'll keep that in mind. :saluting_face: In this case, the ES version I used was 8.7.