Hi All, I am looking for the basic configuration for HA/Failover. I have 2 clusters.
The first cluster consists of a separate coordinating node and a master/data node running as two processes on one machine, while in the second cluster the master/data node runs as a single process. The configurations are below.
I uploaded one index and it was indexed successfully. I have enabled data replication, and the data is successfully copied to the second cluster. Then I shut down the master/data node in cluster 1, hoping that the second cluster would take over on failover. I have added the second cluster's node in the zen discovery section of the .yml file. However, it does not work. Any suggestions would be helpful. Thanks.
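For reference, this is roughly the discovery change I made on the cluster 1 nodes (hostnames and ports below are placeholders, not my real addresses; these are the 6.x zen discovery settings):

    # elasticsearch.yml on the cluster 1 master/data node
    cluster.name: cluster-1
    discovery.zen.ping.unicast.hosts:
      - machine-a.example.com:9300   # the cluster 1 nodes on machine A
      - machine-b.example.com:9300   # the cluster 2 node on machine B that I added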
Thanks David. I am still a newbie to this, so please correct my understanding here.
I have the following:
A1: a coordinating + master node running as one process on Machine A in Cluster 1
A2: a data + master node running as one process on Machine A in Cluster 1
B1: a master + data node running as one process on Machine B in Cluster 2; this has been configured with CCR
Now, when I stop A2, I expect A1 to link to B1. However, this does not happen.
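To make the layout concrete, the role settings on the three nodes are roughly as follows (simplified; the cluster names and node names are just labels for this post):

    # A1 - elasticsearch.yml (Machine A, Cluster 1): coordinating + master, no data
    cluster.name: cluster-1
    node.name: A1
    node.master: true
    node.data: false

    # A2 - elasticsearch.yml (Machine A, Cluster 1): master + data
    cluster.name: cluster-1
    node.name: A2
    node.master: true
    node.data: true

    # B1 - elasticsearch.yml (Machine B, Cluster 2): master + data, replicating via CCR
    cluster.name: cluster-2
    node.name: B1
    node.master: true
    node.data: true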
So if we need to have 3 master-eligible nodes, this would mean the following:
Potentially having 2 master nodes and a data node in Cluster 1, for a total of 3 nodes.
Next, stop the data node to mimic a failover scenario.
Cluster 1 will then automatically fall back on the node in Cluster 2.
No, that's not what should happen. There's no safe way to move a node to a different cluster without risking data loss, so Elasticsearch won't do that. If you stop A2 then you have stopped half of the master-eligible nodes in cluster A, leaving it without a majority of master-eligible nodes to elect a master, and the only safe way to proceed is to start A2 again.
Why do you not just set up a single cluster with 3 nodes that hold data and are master eligible? This cluster can handle one node failing and is highly available.
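A minimal sketch of what that could look like, assuming three machines and the 6.x-style zen discovery settings used elsewhere in this thread (hostnames are placeholders):

    # elasticsearch.yml on each of the three nodes
    # (node.name and network.host differ per node)
    cluster.name: my-ha-cluster
    node.master: true
    node.data: true
    discovery.zen.ping.unicast.hosts:
      - node-1.example.com:9300
      - node-2.example.com:9300
      - node-3.example.com:9300
    # With 3 master-eligible nodes a quorum of 2 avoids split brain and
    # lets the cluster elect a master even if any single node goes down.
    discovery.zen.minimum_master_nodes: 2

On 7.x and later the zen settings are replaced by discovery.seed_hosts and cluster.initial_master_nodes, and minimum_master_nodes no longer needs to be set.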