Master takes a different cluster UUID after recovery


I am running an Elasticsearch cluster on a Kubernetes cluster.
My cluster has the following nodes:
Master nodes: 3
Data nodes: 3
Client nodes: minimum 2 (they can vary with autoscaling)

My data nodes have persistent storage (running as a StatefulSet); all other nodes run on non-persistent storage (running as Deployments).

There are a few cases where all of my master pods go down.
From this state, when new master pods come up, they take a different cluster UUID that does not match my cluster's, and the whole cluster ends up in an inconsistent state.

Is anyone else facing the same issue?

Master nodes require persistent storage, just like data nodes.
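For illustration, here is a minimal sketch of what that could look like: the master nodes deployed as a StatefulSet with a `volumeClaimTemplates` entry, so each pod gets its own PersistentVolumeClaim that survives restarts. The names, image tag, storage size, and resource values below are placeholders, not a recommendation; `node.roles` is the 7.9+ syntax, so older versions would use `node.master: "true"` instead.

```yaml
# Sketch: dedicated master nodes as a StatefulSet with per-pod persistent
# storage. All names and sizes here are hypothetical examples.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master
spec:
  serviceName: es-master
  replicas: 3
  selector:
    matchLabels:
      app: es-master
  template:
    metadata:
      labels:
        app: es-master
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # example tag
          env:
            - name: node.roles
              value: master                              # dedicated master node
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data   # default data path in the official image
  volumeClaimTemplates:                                  # one PVC per pod, kept across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With `volumeClaimTemplates`, a restarted master pod reattaches to its existing volume and recovers the persisted cluster state, instead of bootstrapping a fresh cluster with a new UUID.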

Okay, I will try that as well.

Which version of Elasticsearch are you using?


Then you absolutely need persistent volumes for your dedicated master nodes, as you can otherwise lose the entire cluster and all its data, and would have to restore from a snapshot if all master nodes are lost at the same time.
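One way to confirm whether the cluster state survived a full master restart (assuming a node is reachable on port 9200) is to compare the `cluster_uuid` reported by the root endpoint before and after:

```shell
# The root endpoint returns cluster metadata including cluster_uuid; if the
# value changes after all master pods restart, the state was not persisted.
curl -s http://localhost:9200/ | grep cluster_uuid
# Note: a cluster whose state has not yet been recovered reports "_na_"
# as the UUID until a master is elected.
```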

Thanks, buddy. After adding persistence, it seems to be working now.
