Migrate an existing Elasticsearch cluster running as Java processes on Linux VMs to a Kubernetes cluster

Hello,

I am trying to migrate an existing Elasticsearch cluster, currently running as Java processes on Linux VMs, to a Kubernetes cluster.

What I did: I created a new Elasticsearch data node pod in the Kubernetes cluster and joined it to the existing cluster. In the same way, I created more data node pods, performed a rolling migration of all the data nodes to Kubernetes, and then stopped the Elasticsearch data node processes on the VMs. The shards were reallocated to the new nodes, and once the VM-based data node processes were removed, all the indices were held by the data nodes running in the Kubernetes cluster.
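
For reference, the per-node drain step here is essentially the standard allocation-filtering pattern. A rough sketch, assuming the stock REST API; the endpoint and the node name `es-data-vm-1` are placeholders, not my real values:

```python
import json
import urllib.request

ES = "http://localhost:9200"  # placeholder endpoint

def es_request(method, path, body=None):
    """Minimal helper for calling the Elasticsearch REST API."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        ES + path, data=data, method=method,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Ask the cluster to move all shards off the VM node before stopping it
# (allocation filtering by node name).
es_request("PUT", "/_cluster/settings", {
    "persistent": {"cluster.routing.allocation.exclude._name": "es-data-vm-1"}
})

# Poll until the node holds no shards; only then stop the process on the VM.
shards = es_request("GET", "/_cat/shards?format=json")
remaining = [s for s in shards if s.get("node") == "es-data-vm-1"]
print(f"shards still on es-data-vm-1: {len(remaining)}")
```
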
The problem I am facing: when I try to do the same with the master nodes, I start by running two Elasticsearch master node pods in the Kubernetes cluster and joining them to the existing master nodes. I then stop two of the Elasticsearch master node processes on the VMs. However, when I create a third master node pod in Kubernetes and stop the third master node process (so that all master nodes run on Kubernetes), the Elasticsearch cluster does not come up. It only stays healthy while at least one master node process is still running on a Linux VM.
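
To show what I am looking at: a sketch of how the committed voting configuration can be inspected. This assumes Elasticsearch 7.x or later, where it is exposed in the cluster state; the endpoint is again a placeholder:

```python
import json
import urllib.request

ES = "http://localhost:9200"  # placeholder endpoint

# In Elasticsearch 7.x+ the voting configuration (the set of master-eligible
# node IDs whose votes count towards elections) is stored in the cluster
# state metadata; filter_path keeps the response small.
url = ES + "/_cluster/state?filter_path=metadata.cluster_coordination"
with urllib.request.urlopen(url) as resp:
    state = json.loads(resp.read())

coordination = state.get("metadata", {}).get("cluster_coordination", {})
print("committed voting config (node IDs):",
      coordination.get("last_committed_config"))
print("voting config exclusions:",
      coordination.get("voting_config_exclusions", []))
```

My understanding is that the cluster needs a majority of the nodes in this list to elect a master, which may be related to why it refuses to come up once the last VM master stops.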

How can I migrate the master nodes like I did with the data nodes?

Thanks.

What exactly do you mean by "the Elasticsearch cluster does not come up"? What do the nodes report in their logs?

The reference manual contains quite a lot of information about troubleshooting this kind of thing. Have you followed any of that guidance?
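
As a starting point, it would help to see the cluster health and the elected master as reported from one of your Kubernetes pods, alongside the logs. Something like this sketch would do; the endpoint is a placeholder to adjust for your setup:

```python
import urllib.request

ES = "http://localhost:9200"  # placeholder; point this at one of your nodes

# Both endpoints are read-only diagnostics: _cluster/health shows the
# overall cluster status, _cat/master shows which node (if any) is elected.
for path in ("/_cluster/health?pretty", "/_cat/master?v"):
    try:
        with urllib.request.urlopen(ES + path, timeout=5) as resp:
            print(resp.read().decode())
    except Exception as exc:  # a failure here is itself useful information
        print(f"{path}: request failed: {exc}")
```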