Persistence on ES masters


(Yaron Idan) #1

Hello,
We are running an ES cluster as Docker containers on top of a Kubernetes cluster.
Recently we needed to restart the master node, and upon doing so lost the template we created for our indices.
The data nodes are attached to persistent disks so that data survives a crash, but I haven't done the same for the ES master node, since I was under the impression it is only used for routing requests to the data nodes and does not require any persistence.

I realize now that my assumption was wrong, and that the masters do hold a certain amount of data that needs to be persisted in order to survive a crash of the entire cluster without losing anything.

My question is -
What is persisted within the ES masters?
Is there a way to configure all persistent data to be stored on the data nodes, so that the masters can crash and come back up without losing any configuration?
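
In the meantime I'm re-creating the template by hand after every restart, roughly like the sketch below — the host, template name and body here are only an illustration, not our real values (assuming ES 6.x and a node answering on localhost:9200):

```python
# Minimal sketch: re-apply a lost index template by hand after a restart.
# The host, template name, pattern and settings are all illustrative.
import json
import urllib.request

ES = "http://localhost:9200"

body = {
    "index_patterns": ["logs-*"],        # indices this template applies to
    "settings": {"number_of_shards": 1},
}

req = urllib.request.Request(
    ES + "/_template/logs_template",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())          # expect {"acknowledged":true}
```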

Thanks,
Yaron.


(Jymit Singh Khondhu) #2

Master nodes must have access to the data/ directory (just like data nodes) as this is where the cluster state is persisted between node restarts. Taken from here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html#master-node
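
You can also verify which path.data each node is actually using, and make sure the master's one sits on a persistent volume. A quick sketch, assuming the cluster answers on localhost:9200 without auth:

```python
# Print each node's name and its configured path.data, so you can check
# that the master's data directory is backed by a persistent volume.
import json
import urllib.request

ES = "http://localhost:9200"

with urllib.request.urlopen(ES + "/_nodes/settings") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, info in nodes.items():
    data_path = info.get("settings", {}).get("path", {}).get("data")
    print(info["name"], "->", data_path)
```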


(Yaron Idan) #3

Thanks for the reference.
However, I did not think templates were included in the cluster state. I expected the nodes to lose connectivity while the master is down, since it is the one coordinating them, but I did not expect templates to be part of that state, or to lose them when the master crashes.
Is there any place where I can read more about what is included in this cluster state?


(Jymit Singh Khondhu) #4

I mean, all I'm doing is a Google search: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-state.html
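
If you pull just the metadata section of the state, you can actually see the templates sitting in it. A quick sketch, assuming a node on localhost:9200 without auth:

```python
# Fetch only the metadata part of the cluster state and list the index
# templates stored there — this is what disappears if the master's
# data directory is not persisted and the whole cluster goes down.
import json
import urllib.request

ES = "http://localhost:9200"
url = ES + "/_cluster/state/metadata?filter_path=metadata.templates"

with urllib.request.urlopen(url) as resp:
    state = json.load(resp)

print(list(state.get("metadata", {}).get("templates", {})))
```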


(Yaron Idan) #5

Perhaps my question wasn't clear.
I've read both of these references before posting the question.
What I'm trying to understand is why I lost my index template between the master's shutdown and startup, given my impression that templates are not included in the cluster state.
Again, thanks for all the help.


(Christian Dahlqvist) #6

How many master-eligible nodes do you have? It is recommended to have 3, so that the remaining nodes can elect a new master in case the current master fails.
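
You can check which nodes are master-eligible and which one is currently elected with the _cat API. A sketch, assuming a node on localhost:9200 without auth:

```python
# List node name, roles, and the elected-master marker ('*' appears in the
# master column for the currently elected master).
import urllib.request

ES = "http://localhost:9200"
url = ES + "/_cat/nodes?v&h=name,node.role,master"

print(urllib.request.urlopen(url).read().decode())
```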


(Yaron Idan) #7

I am aware of that.
When I experienced the crash I was running a single master node, a problem I've since addressed by scaling up to 3.
Still, I want to be able to recover even from a crash of all 3 masters, which unfortunately is a possible scenario.


(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.