Cluster UUID different after cluster migration

Hello everybody,

I performed a cluster migration: I recreated a Kubernetes cluster and reattached the existing Elasticsearch EBS volumes (with all the data).

When I recreated the Elasticsearch manifest, it generated a cluster UUID different from the old one. Everything seems to be working great, no problems so far. But this is a staging cluster, and I have to do the same operation on a production one.

Can a different cluster UUID cause problems for future operations?

Thanks in advance

Welcome to our community! :smiley:

What version of Elasticsearch are you running?

We use version 7.10.1.

I would say that if the cluster UUID has changed, you have in fact created a new cluster rather than transplanted your existing one (which, as I understand it, was your goal).

Did you first create the new cluster with the new manifest and then modify it to use the existing volumes? I think that for a successful migration the volumes have to be there from the beginning, to avoid the formation of a new cluster.
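
For illustration only, here is a minimal sketch of the kind of Elasticsearch resource I mean, assuming a cluster named `mycluster` with a single nodeSet named `default` (both are placeholder names, not from your setup). With ECK, the Pod and PVC names are derived from these two names, so they determine which claims the operator will look for:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: mycluster              # placeholder; part of the derived Pod/PVC names
spec:
  version: 7.10.1
  nodeSets:
  - name: default              # placeholder; also part of the derived names
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data       # default data volume claim name in ECK
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2          # placeholder; match your existing volumes
        resources:
          requests:
            storage: 100Gi             # placeholder; match your existing volumes
```

With those names, the Pods would be mycluster-es-default-0, -1, -2 and the data claims elasticsearch-data-mycluster-es-default-0, -1, -2.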

For this to work you have to make sure that PersistentVolumeClaims that point to your existing volumes, but carry the names expected for the new cluster, exist before you create the Elasticsearch resources.
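
As a rough sketch (all names, sizes, namespaces and volume IDs are placeholders, and this assumes the in-tree EBS volume plugin rather than the EBS CSI driver, so adjust the volume source to your setup), pre-binding one existing EBS volume to the claim name expected for the first Pod could look like this:

```yaml
# PersistentVolume pointing at the existing EBS volume, pre-bound to the claim below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycluster-es-default-0-pv
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0    # placeholder: your existing EBS volume ID
    fsType: ext4
  claimRef:
    namespace: default
    name: elasticsearch-data-mycluster-es-default-0
---
# PersistentVolumeClaim with the name derived from the cluster and nodeSet names.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-mycluster-es-default-0
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2
  volumeName: mycluster-es-default-0-pv
  resources:
    requests:
      storage: 100Gi
```

Applied for every data volume before the Elasticsearch resource is created, this should let the new StatefulSets adopt the existing claims instead of provisioning fresh, empty ones.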

We have a tool here, cloud-on-k8s/README.md at 8870369140d2c14a92cb49f1e375b49d161eb761 · elastic/cloud-on-k8s · GitHub, that is only meant for disaster recovery purposes (!) but might be helpful in illustrating the idea if you are familiar with Go.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.