ES 6.5.2: Shards on the same physical server do not automatically relocate to another server after cluster.routing.allocation.same_shard.host is set to true

Hi Elastic Team,

I have got a number of physical servers, on each of which I run multiple ES data nodes.

Because of this setup, copies of the same shard (a primary and its replica) sometimes end up on the same physical server.

I have set cluster.routing.allocation.same_shard.host to true and it does work, but only for newly created shards.
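
For reference, the setting can be applied dynamically via the cluster settings API, something like this (localhost:9200 is just a placeholder for your own endpoint):

  curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
  {
    "persistent": {
      "cluster.routing.allocation.same_shard.host": true
    }
  }'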

The existing shards that were created before the above cluster setting was applied do not automatically relocate to a node on a different physical server. Is this intentional behaviour?

If I wanted to force copies of the same shard living on the same physical server to be relocated to different physical servers, how can I do that?

Many thanks for your help,


Take a look at shard allocation awareness. It allows you to tag nodes with a label based on the physical machine that they run on using the node.attr setting. Elasticsearch will then try to distribute copies of the same shard evenly across the different physical machines.
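
For example, something along these lines. The attribute name server_name and the value alpha are just examples here; you would pick your own value per physical machine:

  # on every node running on physical server "alpha" (example value)
  bin/elasticsearch -E node.attr.server_name=alpha ...

  # then tell the cluster to use that attribute for allocation awareness
  curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
  {
    "persistent": {
      "cluster.routing.allocation.awareness.attributes": "server_name"
    }
  }'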

Many thanks for the reply, Abdon.

One more question, please. I know about the shard allocation awareness feature, but I also saw that cluster.routing.allocation.same_shard.host is meant to serve a similar purpose. Should I use both? In other words, is there a reason why there are two features that do such similar things?

Are your nodes running side by side directly on the host, or are you using containers?

Hi Christian,

I'm not using containers. The nodes run side by side in the same OS / environment.

Sorry for the bump, but any opinion on the above please? :slight_smile:

How are your nodes configured? Can you share the config of two nodes running on the same host?

Thanks for the reply, Chris.

Below is the config for all the data nodes where I tested this behaviour.

bin/elasticsearch
  -E path.logs=/my/path/logs
  -E discovery.zen.minimum_master_nodes=2
  -E bootstrap.system_call_filter=false
  -E node.max_local_storage_nodes=12
  -E node.ingest=true
  -E node.data=true
  -E node.attr.node.type=hot
  -E discovery.zen.ping.unicast.hosts=hostA,hostB,hostC
  -E thread_pool.bulk.queue_size=1500
  -E network.host=0.0.0.0
  -E path.data=/my/path/data/default
  -E cluster.name=my_cluster
  -E xpack.security.enabled=false
  -E node.name=my_node_name_0
  -E node.master=true

And the cluster settings are:

xpack.monitoring.collection.enabled: true
action.destructive_requires_name: true
cluster.routing.allocation.node_concurrent_recoveries: 50
cluster.routing.allocation.node_initial_primaries_recoveries: 50
script.max_compilations_rate: "200/1m"

OK, so nodes on the same physical server use the same IP address but different port numbers? How many master-eligible nodes do you have in the cluster? How many nodes do you run per server?

This is correct.

This is a test cluster, so 3.

For this particular test cluster it is set up as below.

  hostA: 1 default, 1 data, 1 coordinator
  hostB: 1 default, 1 data, 1 coordinator
  hostC: 1 default, 1 coordinator

I was playing around just now and it looks like one way to force the relocation is to close and re-open the index.
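
In case it helps anyone, this is roughly what I ran (my_index is just a placeholder for the affected index, and localhost:9200 for the endpoint):

  curl -X POST "localhost:9200/my_index/_close"
  curl -X POST "localhost:9200/my_index/_open"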

So I think just setting cluster.routing.allocation.same_shard.host would be sufficient and I would not need to add an extra shard allocation awareness setting. For newly created indexes I would not need to do anything, but for existing indexes I would need to close and re-open them.

Does the above sound OK, guys? Is there a better trick than closing and re-opening the index?
