After Kubernetes node restart, data pods can't join the cluster again

Hello,
I'm facing an issue with my Elasticsearch cluster.
I have 9 pods running on 3 nodes in my Kubernetes cluster:
3 elasticsearch-client pods,
3 elasticsearch-data pods,
3 elasticsearch-master pods.
Here are my YAML files:

Everything works fine when I bootstrap the cluster, but at night my 3 Kubernetes nodes are turned off, and in the morning all 3 nodes come up again.
But when the nodes come back up, the data pods can't join the cluster again.
I'm facing the error "master not discovered exception".
I've done some research and found this topic describing the same problem.

And here are some important notes:

Based on the Elasticsearch logs, I think this is definitely related to https://github.com/elastic/cloud-on-k8s/issues/1201.
This can happen when reusing existing persistent volumes, or modifying an existing cluster spec before the cluster is formed.
I think if you delete your cluster, and also delete all existing PersistentVolumeClaims and PersistentVolumes, then recreate your cluster, you should not have this problem.
Definitely something we need to fix in upcoming releases.

Given the current scenario, where my nodes are shut down every night, deleting the cluster and bringing it back up is not feasible. Does anyone have any idea how to work around this kind of problem?

PS: Just deleting the Elasticsearch cluster and bootstrapping it again doesn't work; I have to delete all the PVs and PVCs too.
PS2: The only values that were changed when I brought the cluster up in Kubernetes were the resource values.

Based on your yaml manifests it does not look like you're using ECK to manage your Elasticsearch cluster.

From what I can see your master nodes don't mount any volume, but your data nodes do.
When you restart the entire cluster your data nodes expect to join the previous cluster because of their existing data. But that cluster doesn't exist anymore, since your 3 new master nodes formed a new cluster.
You should either reuse data volumes for all nodes, or not reuse volumes at all for any node.
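To illustrate the first option, here is a minimal sketch of a master StatefulSet that gets its own volumeClaimTemplates, so the masters keep their cluster state across a full restart. The names, image version and storage size are placeholders for illustration, not your actual manifest; node roles, discovery settings and resources are omitted.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
spec:
  serviceName: elasticsearch-master
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch-master
  template:
    metadata:
      labels:
        app: elasticsearch-master
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
        # node roles, discovery settings, JVM options, resources, etc. omitted
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data   # persist master cluster state
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
```

With persistent state on the masters, a full restart should bring back the same cluster, so the data nodes can rejoin it instead of refusing to join the freshly bootstrapped one.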

In any case, I suggest you rely on ECK to manage your cluster.
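For reference, with ECK the same topology can be declared as a single Elasticsearch resource and the operator manages the PersistentVolumeClaims and restarts for you. A minimal sketch, assuming ECK is installed; the name, version and storage sizes are placeholders, and the coordinating (client) nodeSet is left out for brevity:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster
spec:
  version: 7.9.3
  nodeSets:
  - name: master
    count: 3
    config:
      node.master: true
      node.data: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # ECK expects this claim name
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
  - name: data
    count: 3
    config:
      node.master: false
      node.data: true
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```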