Hi everyone,
I have the following situation: my cluster consists of 3 master nodes and 3 data nodes. Unfortunately, due to a hardware problem we lost 2 master nodes at the same time, and the remaining one was not able to take over as master, presumably because a single node out of 3 master-eligible ones does not form a quorum.
I have since recreated those 2 master nodes with the same node names and IPs, but they are not able to join the cluster, and when I start all 3 master nodes I get the following message:
```
[masternode-2] received cluster state from {masternode-3}{MZkz2i3qQYCpivdepDtACA}{4naaRVrvQG6_aLXw5i9qsA}{xxx.xxx.xxx.xxx}{xxx.xxx.xxx.xxx:9300}{cdhilmrstw}{ml.machine_memory=20994146304, ml.max_open_jobs=20, xpack.installed=true, transform.node=true} with a different cluster uuid F_KaGI1GR2Kw2oPmS-czmA than local cluster uuid L4iw4uQhQiCStSUtadLjQg, rejecting
```
masternode-2 is the surviving node; masternode-1 and masternode-3 are the ones we lost. It seems the latest cluster state had masternode-3 as the elected master, and now the recreated masternode-3 is publishing a cluster state with a different cluster UUID, so masternode-2 rejects it with that message.
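To double-check that, this is roughly how I'm comparing the cluster UUID each node reports on its root endpoint (the hostnames are placeholders for the real IPs, and the snippet assumes security is not enabled on the HTTP layer):

```
# Ask each node which cluster UUID it currently belongs to
for host in masternode-1 masternode-2 masternode-3; do
  echo -n "$host: "
  curl -s "http://$host:9200/?pretty" | grep cluster_uuid
done
```

If the log above is anything to go by, masternode-2 reports L4iw4uQhQiCStSUtadLjQg while masternode-3 reports F_KaGI1GR2Kw2oPmS-czmA, i.e. the recreated node seems to have bootstrapped a brand-new cluster instead of rejoining the old one.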
Any hint on how to recover this cluster without losing the data?