Changing Kubernetes storage class for data nodes without data loss

Elasticsearch version: 7.6.2

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:34:25Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-065dce", GitCommit:"065dcecfcd2a91bd68a17ee0b5e895088430bd05", GitTreeState:"clean", BuildDate:"2020-07-16T01:44:47Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
$ helm version --tls --tiller-namespace kube-system
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

I am trying to change the storageClassName from aws-ebs-st1 (HDD) to aws-ebs-gp2 (SSD) for all data nodes (note that all coordinator/master nodes are already bound to SSDs) without data loss, and I think it should be possible. Here is my proposal:

  1. Delete the data statefulset to get rid of the statefulset contract but keep the pods (the pods will be "orphaned"):
    kubectl delete statefulset my-elastic-stack-data -n system-logging --cascade=false
    
  2. Make a copy of the rendered helm manifest for each of the data pods:
    for i in {0..11}; do kubectl get pod my-elastic-stack-data-$i -n system-logging -o yaml > /tmp/data-$i.yaml; done
    
  3. Delete the existing data-11 pod (HDD):
    kubectl delete pod my-elastic-stack-data-11 -n system-logging
    
  4. Delete pvc for data-11 (because the new claim name will be the same):
    kubectl delete pvc my-elastic-stack-data-pvc-11 -n system-logging
    
  5. Wait for the remaining 11 pods (not 12, since we have deleted data-11) to rebalance the shards among themselves (see the health-check sketch after this list)
  6. Somehow create a new SSD pv (and pvc) for data-11 (see the PVC sketch after this list)
  7. Create new data-11 pod (that will now attach to the SSD pv / pvc created in step 6):
    kubectl -n system-logging apply -f /tmp/data-11.yaml
    
  8. Wait for the shards to rebalance across all 12 pods now that data-11 has rejoined
  9. Repeat steps 3-8 for all remaining data pods from data-10 through data-0
  10. Re-deploy the helm chart using the new storageClass; the statefulset should "adopt" the existing pods as long as their names and labels still match its selector and naming pattern (see the helm upgrade sketch after this list)
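
For steps 5 and 8, a minimal sketch of how I would check that rebalancing has finished, assuming the Elasticsearch HTTP API is reachable via a port-forward (the service name below is a guess for my setup):

    # Port-forward to the cluster (service name is a guess)
    kubectl -n system-logging port-forward svc/my-elastic-stack-coordinating 9200:9200 &

    # Block until the cluster reports green (or the timeout expires)
    curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=30m'

    # Double-check that nothing is still relocating or unassigned
    curl -s 'localhost:9200/_cluster/health?filter_path=status,relocating_shards,unassigned_shards'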
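
For step 6, a rough sketch of the PVC I would create by hand, assuming the aws-ebs-gp2 StorageClass uses dynamic provisioning so the PV gets created automatically once the claim is bound; the size below is a placeholder and the claim name has to match whatever /tmp/data-11.yaml references:

    # Hand-created claim for step 6 (name must match the pod spec in /tmp/data-11.yaml)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-elastic-stack-data-pvc-11
      namespace: system-logging
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: aws-ebs-gp2
      resources:
        requests:
          storage: 500Gi  # placeholder; use the same size as the old st1 volume

If the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PV will only materialise once the data-11 pod from step 7 is scheduled, which should still work with this ordering.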
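
And for step 10, roughly what I expect the re-deploy to look like; the release name, chart reference, and value key below are guesses for my setup:

    # Re-render the chart so the data volumeClaimTemplates use the new storage class
    helm upgrade my-elastic-stack stable/elastic-stack \
      --tls --tiller-namespace kube-system \
      --namespace system-logging \
      --set data.persistence.storageClass=aws-ebs-gp2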

Would love some advice on this. Thanks in advance!
