I have two active nodes, and I want to move all shards off of one of them, prior to taking it out of service. Because I'm cheap, and not too worried about data loss, I have 0 replicas configured. So I want to use shard allocation filtering to move all the current shards off of that node.
However, the filtering does not take effect. If I use the cluster reroute API I can manually force the shards to migrate, and new shards are allocated only to the node I want, but existing shards are not moved when I set up the filter. What am I doing wrong? How do I prod the cluster into making this change?
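For context, the manual reroute I'm using looks roughly like this (the index name, shard number, and node names here are placeholders for my actual values):

POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my_index",
        "shard": 0,
        "from_node": "node-to-drain",
        "to_node": "node-to-keep"
      }
    }
  ]
}

That works, but doing it shard by shard is exactly what I hoped the allocation filter would save me from.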
Here's what I do via Sense:
PUT _cluster/settings
{
  "transient": {
    "cluster": {
      "routing": {
        "allocation": {
          "exclude": {
            "_ip": "10.240.0.6"
          }
        }
      }
    }
  }
}
Does it return a 200 if you issue that? What do your ES logs say?
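A quick sanity check is to read the settings back and confirm the exclude rule actually shows up in the transient block:

GET _cluster/settings

If the rule is missing from the response, the PUT never applied; if it's there but shards still aren't moving, the logs are the next place to look.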
Why not add a replica, wait until it has replicated, then remove the other node and let ES handle the promotion of the replicas, so you have a full dataset?
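Something like this on the index would do it (the index name is a placeholder):

PUT my_index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}

Once the cluster goes green the replica is in sync, and you can take the old node down safely.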
I could just add the replica, but that would fill up my disk - like I said: I'm being cheap. I thought the point of shard allocation filtering was that it would proactively relocate the shards. That does not seem to be the case.
I don't see anything about this in the logs; they show no change when I submit the command. This is the response I get back from the PUT:
I can store a replica on the node the shards are moving to, but not on the other one. If creating replicas is a requirement, I can probably work it that way - I'm just surprised the shard filtering doesn't proactively cause primary shards to move.
I have about 500G of data, spread across two nodes. One node is 30% full, because I have a 1TB partition allocated to /opt where I'm keeping the ES store. The other node is 80% full because the same partition is only 500G. If I turn on replicas, then I would expect ES to create 500G of data on both nodes. I can afford it on one, but not the other. My plan was to simply migrate all the data off of the node with the smaller disk, and then shut ES down while I upgrade the disk.
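For what it's worth, this is roughly how I'm checking disk use and shard placement per node:

GET _cat/allocation?v
GET _cat/shards?v

The allocation output shows disk used versus available on each node, and the shards output shows where each primary currently lives.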