Discovery.zen.fd.ping_retries: 5

Is it possible to configure discovery.zen.fd.ping_retries to retry forever? Here is the use case I am looking at.

ES is brought up while there is no connectivity between any of the nodes in the cluster, and eventually the network comes up. For example, 10.0.0.1, 10.0.0.2 and 10.0.0.3 are ALL the nodes that should form the cluster, but there is no connectivity between them to begin with. Even if I configure, say, discovery.zen.fd.ping_retries: 5, then after 5 retries a node stops communicating with the other nodes. How do I recover from this scenario? One way I can think of is to configure ping_retries to be infinite, i.e. retry forever.
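
For context, the settings I am talking about would sit in each node's elasticsearch.yml roughly like this (a sketch only; the interval and timeout lines show my understanding of the defaults, not values I have tuned):

discovery.zen.ping.unicast.hosts: "10.0.0.1,10.0.0.2,10.0.0.3"
discovery.zen.fd.ping_interval: 1s     # how often master/node fault-detection pings are sent
discovery.zen.fd.ping_timeout: 30s     # how long to wait for each fault-detection ping
discovery.zen.fd.ping_retries: 5       # failed pings tolerated before the other node is considered gone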

Appreciate your help.
thanks.
Jala

That sounds strange. I just did a test on my Elasticsearch 6.2.2 devel cluster, configuring one node with an invalid value for discovery.zen.ping.unicast.hosts before restarting it, and that node pinged forever (well, until I stopped it after five minutes and 103 pings):

user@node-02:~$ grep "pinging again" logs/devel.log | head -n8
[2018-05-15T09:07:23,884][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:26,888][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:29,891][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:32,895][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:35,901][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:38,904][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:41,907][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again
[2018-05-15T09:07:44,910][WARN ][o.e.d.z.ZenDiscovery     ] [node-02] not enough master nodes discovered during pinging (found [[]], but needed [-1]), pinging again

user@node-02:~$ grep -c "pinging again" logs/devel.log
103

So I can't understand why your nodes stopped pinging after just 5 attempts. What version of Elasticsearch are you running?

Hi Bernt.

Here is a more specific scenario that may happen in production, described at three points in time: t0, t1 and t2.

Say at t0: cluster size = 1, so min_masters = clustersize/2 + 1 = 1. Two nodes, n0 and n1, come up with this config: discovery.zen.ping.unicast.hosts: "10.0.0.1,10.0.0.2", where n0 = 10.0.0.1 and n1 = 10.0.0.2. The cluster health output shows GREEN with both nodes in the cluster; life is good.
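
The elasticsearch.yml on both n0 and n1 would then contain roughly this (a sketch of my setup, not the complete files):

cluster.name: elasticsearch
discovery.zen.ping.unicast.hosts: "10.0.0.1,10.0.0.2"
discovery.zen.minimum_master_nodes: 1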

bash-4.2# curl https://10.0.0.2:9200/_cluster/health?pretty --key certs/admin/server8.key --cert certs/admin/server.crt --cacert certs/admin/cacert.crt
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 2,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}

Say at t1, network connectivity has been lost somewhere between t0 and t1, and the cluster health shows YELLOW:

bash-4.2# curl https://10.0.0.2:9200/_cluster/health?pretty --key certs/admin/server8.key --cert certs/admin/server.crt --cacert certs/admin/cacert.crt
{
"cluster_name" : "elasticsearch",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 2,
"active_shards" : 2,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 1,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 66.66666666666666
}

Say at t2, network connectivity is back up between n0 and n1, yet each node still shows cluster status YELLOW and keeps functioning with 1 node, even though discovery.zen.ping.unicast.hosts: "10.0.0.1,10.0.0.2". I want these two nodes to be part of one cluster, but because min_masters is still 1 they keep functioning separately in the YELLOW state. How can I get these two nodes to join into one cluster? Restarting one of the nodes triggers new pings and the cluster then forms, but is there a way to do this without restarting?

Elastic "version" : "6.1.1",

thanks.
jala

I'm starting to suspect your problem is not related to the ping_retries setting but to master election and multiple clusters.

If you have more than one master-eligible node in the cluster, you should never set minimum_master_nodes: 1, as that can lead to multiple clusters and the dreaded "split brain" problem.

For instance, if you have 2 nodes in your cluster and the network connection is lost at time t1, the old master will happily continue running as master of its now 1-node cluster, while the other node, after failing to communicate with the master, will promote itself to master of its own 1-node cluster. From then on you have 2 clusters running in parallel, each with its own set of data. If you index, update or delete documents in one of them, the data in the two clusters will diverge, hence the term split brain.
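
A quick way to see that this has happened is to ask each node which master it reports (reusing the TLS options from your curl example):

bash-4.2# curl "https://10.0.0.1:9200/_cat/master?v" --key certs/admin/server8.key --cert certs/admin/server.crt --cacert certs/admin/cacert.crt
bash-4.2# curl "https://10.0.0.2:9200/_cat/master?v" --key certs/admin/server8.key --cert certs/admin/server.crt --cacert certs/admin/cacert.crt

If the two nodes name different masters (and each reports "number_of_nodes" : 1 in _cluster/health), you are looking at two separate clusters.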

Even after the network connection comes back at t2 the two clusters will continue life on their own, since each now has its own master. The only way to join nodes from one cluster into another is by restarting them, because once the extra master is down the restarted nodes should connect to the correct master that is still running.

However, this may not be so easy if the data in the two clusters differ. Say you've indexed document A into primary shard 2 of my_index on cluster 1 while document B was indexed into primary shard 2 of my_index on cluster 2; then there is no way for Elasticsearch to decide which version of primary shard 2 to use for my_index when you try to join the two nodes into one cluster again. The index will then be in a red state and you won't be able to index into it. I've had this problem myself and was forced to use the Cluster Reroute API to select which primary shard to use, and when you do that, you lose either document A or B.
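
Just to illustrate, such a forced allocation looks roughly like this (index name, shard number and node name are placeholders for your own situation, and accept_data_loss makes the data loss explicit; add your usual TLS options):

bash-4.2# curl -XPOST "https://10.0.0.2:9200/_cluster/reroute" -H 'Content-Type: application/json' -d '
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "my_index",
        "shard": 2,
        "node": "node-02",
        "accept_data_loss": true
      }
    }
  ]
}'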

So when setting discovery.zen.minimum_master_nodes in your cluster, always use the formula

"number of master eligible nodes" / 2 + 1

which, if I understand your example correctly, would be 2/2 + 1 = 2. In this case, if you lose the connection for a moment, the second node will not be able to promote itself to master because it is alone and would need at least one more master-eligible node in its cluster.
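
In your two-node example that simply means this line on both nodes (and it must be updated whenever you add or remove master-eligible nodes):

discovery.zen.minimum_master_nodes: 2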

However, if node 2 has node.master: false in its elasticsearch.yml file, the calculation becomes 1/2 + 1 = 1, which should work well: in this case, if you lose the network, the master (node 1) will go on serving requests in its 1-node cluster, while node 2 will stop serving requests because it can't promote itself to master, and without a master it can't do anything.
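
As a sketch (assuming node 2 should still hold data), the two configurations would then be:

# node 1 (10.0.0.1), the only master-eligible node
node.master: true
node.data: true
discovery.zen.minimum_master_nodes: 1

# node 2 (10.0.0.2), data-only
node.master: false
node.data: true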

