Possible host awareness when doing shard relocation?

We're running a cluster where each host (physical machine) runs 2 Elasticsearch nodes. On each host, one node ("hot") uses an SSD for storage, while the other ("stale") uses a SATA disk. We use shard allocation filters to control index placement, so that recent indices are kept on hot nodes and older indices on stale nodes. When a hot index becomes "old", we change its shard allocation filter so that the cluster automatically relocates it from hot to stale.
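The workflow above can be sketched with the index-level allocation filter API. This assumes the nodes were started with a custom attribute (here called `box_type`, an illustrative name) set to `hot` or `stale`:

```shell
# Assumed node setup (elasticsearch.yml on each node):
#   hot node:   node.attr.box_type: hot
#   stale node: node.attr.box_type: stale

# A recent index is pinned to the hot (SSD) nodes:
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require.box_type": "hot"}'

# When the index ages out, flip the filter to the stale (SATA) nodes;
# the cluster then relocates the shards automatically:
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require.box_type": "stale"}'
```

The index name and attribute values are placeholders; only the `index.routing.allocation.require.*` setting itself is part of the Elasticsearch API.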
However, the relocation is quite random. Suppose there are 3 hosts, H1 to H3. What I hope to see is that shards on hot_node_H1 are relocated to stale_node_H1, and so on. But often the shards on hot_node_H1 are relocated to stale_node_H3 instead, which causes what I think is unnecessary network traffic.
Is there any config to achieve this, so that in my scenario the relocation is more efficient?

OK, let me ask some questions.

  • Are stale_node_H3 and hot_node_H3 the same physical machine?
  • Given that the answer to my previous question is yes, why would you want to do this at all? Can't you just leave the shards where they are?
  • When we move a shard from any given node X to any given node Y, we push the data over the network. You mean it could be more efficient to push it through the loopback interface when both nodes sit on the same host, correct?
  • If you already prevent allocation on hot_node_H3 via allocation filters, why don't you use those filters to advise the system to move the shards to stale_node_H3?
  • Also, if you have 2 nodes running on the same physical hardware, you should set cluster.routing.allocation.same_shard.host: true to tell Elasticsearch not to place a shard's replica on the same physical machine as its primary. If you do that, a copy of a shard will never be relocated onto the same physical host as another copy.
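The last two points can be sketched as follows. The cluster setting is applied via the cluster settings API; the second call shows how allocation filters can target a concrete host using the built-in `_host` attribute (the index name and hostname `H3` are placeholders, and note that requiring a single host only makes sense for shards without replicas once `same_shard.host` is enabled):

```shell
# Keep primary and replica off the same physical machine
# (relevant because two nodes share each host):
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.same_shard.host": true}}'

# Steer an aged index to the stale node of a specific machine,
# combining the custom box_type attribute with the built-in _host filter:
curl -X PUT "localhost:9200/old-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{
        "index.routing.allocation.require.box_type": "stale",
        "index.routing.allocation.require._host": "H3"
      }'
```

This is a sketch, not a turnkey solution: Elasticsearch has no built-in "prefer the same host" relocation preference, so per-host targeting like this would have to be driven per index by external tooling.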


Thank you for your reply.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.