I performed a cluster migration: I recreated a Kubernetes cluster and reattached the existing Elasticsearch EBS volumes (with all their data).
When I recreated the Elasticsearch manifest, it generated a cluster UUID different from the old one. Everything seems to be working great, no problems so far. But this is a staging cluster, and I have to do the same operation on a production one.
Can a different cluster UUID cause problems for future operations?
I would say that if the cluster UUID has changed, you have in fact created a new cluster rather than transplanted your existing one (which, as I understand it, is your goal).
Did you first create the new cluster with the new manifest and then modify it to use the existing volumes? I think that for a successful migration the volumes have to be there from the beginning, to avoid the formation of a new cluster.
For this to work, you have to make sure that PersistentVolumeClaims that point to your existing volumes, named correctly for the new cluster, exist before you create the Elasticsearch resource.
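As a rough illustration, a pre-created PVC for the first data node might look like the sketch below. The cluster name (`quickstart`), nodeSet name (`default`), PV name, storage class, and size are all assumptions for the example; ECK derives the expected claim name from the pattern `elasticsearch-data-<clusterName>-es-<nodeSetName>-<ordinal>`, so the names must match what your new Elasticsearch resource will produce.

```yaml
# Hypothetical PVC bound to a PV that wraps one of the old EBS volumes.
# Assumes an Elasticsearch resource named "quickstart" with a nodeSet "default".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-quickstart-es-default-0  # must match ECK's expected name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi                 # must match the existing volume's size
  storageClassName: gp2              # assumption: whatever class the PV uses
  volumeName: old-es-data-pv-0       # the pre-created PV for the reattached EBS volume
```

If these claims exist and are bound before the Elasticsearch resource is applied, the pods should come up on the old data and keep the original cluster UUID.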