Shards that were created before the above cluster setting was applied do not automatically relocate to a node on a different physical server. Is this intentional behaviour?
If I want to force copies of the same shard that live on one physical server to relocate to different physical servers, how can I do that?
Take a look at shard allocation awareness. Using a custom node.attr.* setting, you can tag each node with a label identifying the physical machine it runs on. Elasticsearch will then try to distribute copies of the same shard evenly across the different physical machines.
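As a rough sketch of that configuration (the attribute name server_id and its values are just placeholders, not required names), you would tag each node in its elasticsearch.yml and then tell the cluster to use that attribute for awareness:

```yaml
# elasticsearch.yml on each node; nodes on the same physical server get the same value
node.attr.server_id: server-1
```

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "server_id"
  }
}
```

The awareness attribute can also be set in elasticsearch.yml instead of via the cluster settings API.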
One more question, please. I know about the shard allocation awareness feature, but I also saw that cluster.routing.allocation.same_shard.host is meant to help with a similar purpose. Should I use both? In other words, is there a reason there are two features that do roughly the same thing?
OK, so nodes on the same physical server use the same IP address but different port numbers? How many master-eligible nodes do you have in the cluster? How many nodes do you run per server?
I was experimenting just now, and it looks like one way to force the relocation is to close and re-open the index.
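For reference, this is what I did with the index APIs (the index name my-index is just an example):

```
POST /my-index/_close
POST /my-index/_open
```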
So I think setting cluster.routing.allocation.same_shard.host alone would be sufficient, and I would not need to add an extra shard allocation awareness setting. For newly created indexes I would not need to do anything, but for existing indexes I would need to close and re-open them.
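Something along these lines is what I have in mind (a sketch only, applying the setting dynamically via the cluster settings API):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.same_shard.host": true
  }
}
```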
Does the above sound OK? Is there a better trick than closing and re-opening the index?