My ES cluster is currently running out of disk space, and the new disks are still being purchased. I want to preserve as much data as possible, so I'm coping by deleting replicas.
The index in question is a historical index managed by ILM; it only receives queries, no writes. If I simply set the replica count to 0, disk usage across the nodes becomes unbalanced. If changing a shard's role (primary vs. replica) were supported, this would be much more convenient: I could evenly distribute one primary shard to each node and then set replicas to 0 to achieve my goal. The current shard distribution is:
index shard prirep state docs store ip node
iwc_aime_dialog_record-2025.06.06-000017 0 p STARTED 36475406 116.9gb 10.201.223.142 iwc-index-aie21-es-data-nodes_10.201.223.142
iwc_aime_dialog_record-2025.06.06-000017 1 p STARTED 36486635 117gb 10.201.223.142 iwc-index-aie21-es-data-nodes_10.201.223.142
iwc_aime_dialog_record-2025.06.06-000017 2 p STARTED 36451906 116.8gb 10.201.155.68 iwc-index-aie21-es-data-nodes_10.201.155.68
iwc_aime_dialog_record-2025.06.06-000017 3 p STARTED 36474449 117gb 10.201.155.68 iwc-index-aie21-es-data-nodes_10.201.155.68
iwc_aime_dialog_record-2025.06.06-000017 4 p STARTED 36483364 117gb 10.201.150.36 iwc-index-aie21-es-data-nodes_10.201.150.36
iwc_aime_dialog_record-2025.06.06-000017 5 p STARTED 36505509 117.1gb 10.201.150.36 iwc-index-aie21-es-data-nodes_10.201.150.36
iwc_aime_dialog_record-2025.06.06-000017 0 r STARTED 36475406 116.9gb 10.201.234.192 iwc-index-aie21-es-data-nodes_10.201.234.192
iwc_aime_dialog_record-2025.06.06-000017 1 r STARTED 36486635 117gb 10.201.234.192 iwc-index-aie21-es-data-nodes_10.201.234.192
iwc_aime_dialog_record-2025.06.06-000017 2 r STARTED 36451906 116.8gb 10.201.171.69 iwc-index-aie21-es-data-nodes_10.201.171.69
iwc_aime_dialog_record-2025.06.06-000017 3 r STARTED 36474449 116.9gb 10.201.171.69 iwc-index-aie21-es-data-nodes_10.201.171.69
iwc_aime_dialog_record-2025.06.06-000017 4 r STARTED 36483364 116.9gb 10.201.139.80 iwc-index-aie21-es-data-nodes_10.201.139.80
iwc_aime_dialog_record-2025.06.06-000017 5 r STARTED 36505509 117.1gb 10.201.139.80 iwc-index-aie21-es-data-nodes_10.201.139.80
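The plan described above (spread the primaries evenly across the six nodes, then drop the replicas) could be sketched with the standard cluster reroute and index settings APIs. This is only a sketch, not a tested procedure: the index and node names are taken from the listing above, and the specific shard/target pairing is an assumption. Note that Elasticsearch will reject a move that would place a shard on a node already holding another copy of the same shard, so each primary must be moved to a node that holds replicas of *different* shards.

```
# Step 1: relocate primaries so each data node ends up holding one.
# Example: move primary shard 1 off 10.201.223.142 onto the node that
# currently holds only replicas of shards 2 and 3 (no copy of shard 1).
# Repeat with a "move" command per primary that needs relocating.
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "iwc_aime_dialog_record-2025.06.06-000017",
        "shard": 1,
        "from_node": "iwc-index-aie21-es-data-nodes_10.201.223.142",
        "to_node": "iwc-index-aie21-es-data-nodes_10.201.171.69"
      }
    }
  ]
}

# Step 2: once each node holds exactly one primary, drop the replicas.
PUT iwc_aime_dialog_record-2025.06.06-000017/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}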
Perhaps I'm missing something...
If you're saying you are setting replicas to zero (dangerous, but it sounds like you understand that; hopefully you have snapshots), why not set index.routing.allocation.total_shards_per_node: 1?
This will ensure each node holds a maximum of one shard of this index, so with replicas at zero, one primary per node.
You did not mention what version you are on; newer versions do a better job of balancing...
index.routing.allocation.total_shards_per_node is a perfect index-level setting to limit the number of shards per node.
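For reference, that setting can be applied to the existing index via the index settings API. A sketch, assuming replicas have already been reduced to 0 (with 6 primaries plus 6 replicas still allocated, a limit of 1 on 6 nodes would leave shards unassigned):

```
PUT iwc_aime_dialog_record-2025.06.06-000017/_settings
{
  "index.routing.allocation.total_shards_per_node": 1
}
```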
Let me try to clarify the question. The index shards are already balanced: every node holds 2 shards of the iwc_aime_dialog_record-2025.06.06-000017 index, but some nodes hold 2 primaries and some hold 2 replicas. What @EricTowns is asking for is "keeping exactly one primary shard and one replica per node". Unfortunately, Elasticsearch doesn't have a setting for this logic.
Well, actually, say you have 3 primaries and 1 replica; that equals 6 total shards.
If you have 6 nodes and you set index.routing.allocation.total_shards_per_node: 1,
you will end up with exactly 1 primary or 1 replica per node.
This is a bit risky: if you lose a node, you will have unassigned shards. So we always set primaries = (number of nodes / 2) - 1, which in this case would mean 2 primaries.
Not to go in circles: the OP already said he is proposing to set replicas to 0.
So the setting index.routing.allocation.total_shards_per_node: 1 will still distribute the shards with no more than 1 primary per node.
And we still don't know what version we are working with, which can greatly impact allocation.
Sure, you can manually reroute your shards as you explained; I'm just supplying the options.
Yes, this is exactly the effect I want to achieve. I think at present only manually rerouting the shards can achieve it. Thank you both for your suggestions. @Musab_Dogan @stephenb
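After rerouting, the resulting layout can be verified with the same cat shards API that produced the listing at the top of this thread:

```
GET _cat/shards/iwc_aime_dialog_record-2025.06.06-000017?v=true&h=index,shard,prirep,state,docs,store,ip,node
```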
Cool!!
BTW @EricTowns you never mentioned what version you are on...
Please always include that in your questions, Elastic evolves quickly, and releases can have different behaviors.