Add data node to existing cluster with 3 masters and 2 other data nodes

Hi,
I have a ES cluster that had 5 nodes, 3 masters and 2 data nodes, all of them version 6.1.1. All was working well, but we wanted to add a third data node. We did add it and it joined the cluster and everything seemed fine. Except that the cluster never rebalanced on it's own (as I would have expected) and did not send anything to that new data node. I checked all the topics with the same question and the solutions do not seem to help my case.

Elasticsearch starts just fine on the new node, and its configuration is the same as on the other data nodes.

The cluster health now also shows as yellow.
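For reference, I check the health with the standard health endpoint, along these lines:

curl -XGET "http://localhost:9250/_cluster/health?pretty"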

When I list the shards, they all appear on the same data1 and data2 nodes.
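I list them with the standard _cat/shards endpoint, something like:

curl -XGET "http://localhost:9250/_cat/shards?v"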

What should I do to get some of the shards to move to data3, preferably in a way that can scale over time so I do not have to rebalance manually? If that cannot be done, how do I do it manually? I have tried many docs that claim to cover this, but it does not seem to work. Any thoughts? I am fairly new to ES, so advice is welcome.

Hello Abel,

Are your newly added data node and the older data nodes running the same ES version, or different ones?

Please check.


Hi Tek_Chand,

All nodes were installed from the identical installer. I took a while checking that I was right; please see the following output of curl localhost on all hosts:
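(If it is easier to compare, a single _cat/nodes call like the one below also lists every node's version in one place; the columns are the stock ones and the port may differ in your setup.)

curl -XGET "http://localhost:9250/_cat/nodes?v&h=name,version,node.role"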

Thanks for the prompt response.
Abel

You didn't show the configuration for the data3 node, just the m1 master, but I assume you've set node.data: true in elasticsearch.yml on data3 (if not, that would explain the lack of data shards).
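For a dedicated data node the role settings in elasticsearch.yml usually look something like this (only the role flags are shown; everything else stays as on your other data nodes):

node.master: false
node.data: true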

The only other thing I can think of that would stop shards from being moved to the new data node is that Cluster Level Shard Allocation has been disabled at some point.

You could for instance try this command:

curl -XPUT "http://localhost:9250/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{
    "transient": {
        "cluster.routing.allocation.enable": "all"
    }
}'

If that has no effect, you could try setting cluster.routing.rebalance.enable and cluster.routing.allocation.allow_rebalance in the same manner as above, to see if either of those has been disabled.
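For example, setting both back to their defaults ("all" and "indices_all_active" respectively) would look like this:

curl -XPUT "http://localhost:9250/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{
    "transient": {
        "cluster.routing.rebalance.enable": "all",
        "cluster.routing.allocation.allow_rebalance": "indices_all_active"
    }
}'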

If none of this works, you may want to try the Cluster Allocation Explain API to see if it gives you any hints as to why the shards are not getting allocated to the new data3 node.
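With no request body it explains the first unassigned shard it finds; to ask about a specific shard you can pass the index name, shard number and whether it is the primary (the index name below is just a placeholder):

curl -XGET "http://localhost:9250/_cluster/allocation/explain?pretty" -H "Content-Type: application/json" -d '{
    "index": "my-index",
    "shard": 0,
    "primary": true
}'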


The cluster is showing yellow because one shard is unassigned.
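You can see which one with, for example:

curl -XGET "http://localhost:9250/_cat/shards?v" | grep UNASSIGNED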


Hi Bernt_Rostad,

It was cluster.routing.allocation.allow_rebalance.
Now a few shards appear as RELOCATING.

Thanks very much. How could this have changed, or is it set that way by default?
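For the record, any overrides like this show up when dumping the cluster settings, e.g.:

curl -XGET "http://localhost:9250/_cluster/settings?pretty"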

Tek_Chand, yes, you are right about the unassigned index; this was the developers breaking things in Kibana's monitoring indices.
I think I can fix that.

Thanks again to both of you. I will read up on the subject.
Regards

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.