Initial master nodes ES 7.0.0

Do all the nodes specified in cluster.initial_master_nodes have to be up and running before the first master election occurs?
If yes, any ideas how to achieve that in Kubernetes? In Kubernetes, StatefulSet pods are started sequentially.

No, you don't need them all up, but you do need more than half of them to be running. For example, with three nodes listed in cluster.initial_master_nodes, at least two must be running before the first election can succeed.

By default a Kubernetes StatefulSet starts its pods one at a time and only starts each pod once the previous one is healthy. You can alter this behaviour by setting podManagementPolicy to Parallel, or else you can account for it in your readiness probe implementation: a pod can be considered ready once it responds to GET /. The Helm chart implements this latter idea.
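For concreteness, here is a minimal sketch of a StatefulSet combining podManagementPolicy: Parallel with a GET / readiness probe. The names (es-master, the image tag) and the omitted discovery settings are illustrative assumptions, not something prescribed in this thread:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master                 # illustrative name
spec:
  serviceName: es-master
  replicas: 3
  podManagementPolicy: Parallel   # start all pods at once instead of one by one
  selector:
    matchLabels:
      app: es-master
  template:
    metadata:
      labels:
        app: es-master
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0
          ports:
            - containerPort: 9200
          readinessProbe:
            httpGet:              # ready as soon as the node answers GET /
              path: /
              port: 9200
            initialDelaySeconds: 10
            periodSeconds: 5
```

With Parallel, all three master-eligible pods start together, so the bootstrap quorum can form without waiting on sequential readiness.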


I can try setting podManagementPolicy to Parallel and see if it works with the ES setup.

I have tried the readiness probe specified in the Helm chart, but I had problems when using it with data nodes. It got stuck in a loop: when one data node is upgrading, the cluster state turns red, and it never turns green until the node is back and traffic resumes, yet that only happens once the readiness probe passes, which waits for green.

I think this means you have some indices without any replicas, or else the cluster health wasn't green to start with. A properly-configured cluster should only degrade from green to yellow health when one node is shut down.

However it looks like the readiness probe in the Helm chart always waits for green health; maybe it should consider yellow health acceptable too, to allow for upgrades.
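As an illustration, a probe that blocks on yellow rather than green might look like the sketch below. This is an assumption about how one could adapt the chart's probe, not the chart's actual implementation; it relies on the cluster health API returning an error HTTP status when wait_for_status times out:

```yaml
readinessProbe:
  exec:
    command:
      - sh
      - -c
      # -f makes curl fail if the health API times out waiting for yellow,
      # so the probe fails while the cluster is red but passes on yellow/green
      - 'curl -sf "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=5s" > /dev/null'
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 10   # must exceed the 5s wait_for_status timeout
```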

data nodes: 3
index: 1
shards: 3
replicas: 1

With the settings above, when I do a rolling upgrade on the ES data nodes, one node goes down completely, the cluster state is no longer green, and the readiness probe in the Helm chart does not tolerate that.

How many of your nodes are master eligible?

3 nodes are master-eligible, with only the master role enabled.
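(For reference, in 7.0 a dedicated master node is typically configured with the boolean role flags in elasticsearch.yml; the node.roles setting only arrived later in the 7.x series. A minimal sketch, as an assumption of what that setup looks like:

```yaml
node.master: true    # master-eligible
node.data: false     # holds no shard data
node.ingest: false   # runs no ingest pipelines
```
)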

Yes, I'm going to ask the team about that. It's the green in this line; perhaps yellow would be ok?

I tried setting podManagementPolicy to Parallel and it still does not work. The situation at the moment is that I have three master nodes, and they all come up and run. Each node elects itself as master, and I end up with three clusters with one master node each. The setting discovery.zen.minimum_master_nodes: 2 also has no effect; the logs show it set to -1. I also could not find this setting in the ES 7.0.0 docs, so maybe it is set differently now.

This indicates that cluster.initial_master_nodes is set wrongly. From the docs:

You must set cluster.initial_master_nodes to the same list of nodes on each node on which it is set in order to be sure that only a single cluster forms during bootstrapping and therefore to avoid the risk of data loss.
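Concretely, the fix is to give every master-eligible node the identical bootstrap list. A minimal elasticsearch.yml sketch, assuming StatefulSet-style node names (es-master-0/1/2 are illustrative and must match each node's node.name):

```yaml
cluster.name: my-cluster        # illustrative
node.name: es-master-0          # es-master-1 / es-master-2 on the other nodes
cluster.initial_master_nodes:   # identical list on all three nodes
  - es-master-0
  - es-master-1
  - es-master-2
```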

