Resize disk of a StatefulSet

Hi,

I have a cluster and I want to resize the disks in it.
The storage class is "ssd", a custom storage class with the following definition:

    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      creationTimestamp: "2019-12-17T09:16:59Z"
      name: ssd
      resourceVersion: "17946775"
      selfLink: /apis/storage.k8s.io/v1/storageclasses/ssd
      uid: fa981893-20ad-11ea-bc1e-4201c0a80008
    parameters:
      type: pd-ssd
    provisioner: kubernetes.io/gce-pd
    reclaimPolicy: Delete
    volumeBindingMode: Immediate

So with this storage class, volume expansion is enabled for my volumes.
But when I try to resize a volume, the operator doesn't resize the disk and logs the following error:

    {
      "level": "error",
      "@timestamp": "2020-02-18T10:27:53.573Z",
      "logger": "controller-runtime.controller",
      "message": "Reconciler error",
      "ver": "1.0.0-beta1-84792e30",
      "controller": "elasticsearch-controller",
      "request": "default/elk-jaeger-datastore",
      "error": "StatefulSet.apps \"elk-jaeger-datastore-es-all-europe-west1-a\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden",
      "errorCauses": [
        {
          "error": "StatefulSet.apps \"elk-jaeger-datastore-es-all-europe-west1-a\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden"
        }
      ],
      "stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.1/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.1/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.2.1/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190404173353-6a84e37a896d/pkg/util/wait/wait.go:88"
    }

So I would like to know if there is a way to resize the disk, or if such a feature is in progress?
If not, how can I resize the volumes without dropping my cluster? (Adding new pods with the new size, waiting for the data migration, then deleting the old pods step by step?)
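
For reference, the change I was trying to apply is essentially just a bigger storage request in the NodeSet's volume claim template, something like this (the sizes below are illustrative):

    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data      # ECK's default data claim
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: ssd         # the expandable class shown above
        resources:
          requests:
            storage: 150Gi            # bumped from 100Gi; this edit triggers the error above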

Currently there is no easy way to resize existing volumes. See this doc on limitations. Here is the issue you can watch.

The easiest workaround is to rename your existing NodeSets in the Elasticsearch specification and give them a different volume size at the same time.

For example, if you have a NodeSet named "all-europe-west-1a" which specifies 100GB volumes, you can rename it to "all-europe-west-1a-resized", change the volume specification to 150GB, then apply the manifest.
ECK will take care of adding new nodes in "all-europe-west-1a-resized", migrating data away from the nodes in "all-europe-west-1a", then removing those nodes; see the sketch below.
It might take some time depending on how much data you have there, but at least it's safe and simple.
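
A minimal sketch of what that looks like in the Elasticsearch manifest (assuming the v1beta1 API to match your operator version; the node count and Elasticsearch version are placeholders, keep your existing values):

    apiVersion: elasticsearch.k8s.elastic.co/v1beta1
    kind: Elasticsearch
    metadata:
      name: elk-jaeger-datastore
    spec:
      version: 7.5.2                         # placeholder: keep your current version
      nodeSets:
      - name: all-europe-west-1a-resized     # renamed NodeSet -> ECK creates a new StatefulSet
        count: 3                             # placeholder: keep your current node count
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            storageClassName: ssd
            resources:
              requests:
                storage: 150Gi               # the new, larger size

Because the NodeSet name changed, ECK creates a brand-new StatefulSet with the larger claims instead of trying to mutate the existing one, which is what sidesteps the Forbidden error above.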

Not sure about the gce-pd provisioner, but we have a similar setup where ES is scaled as a StatefulSet and the disk is mounted from a hostPath. We extended the disk size for all the nodes in the cluster to double the size without any issues (using the option provided in the AWS console).

OK, I see, thank you for the information.
So we have to wait for Kubernetes to give us better support for disk resizing, if I understand correctly?
But it's a little weird, since the cloud provider offers a feature to resize disks...
It would be very nice to have support for volume resizing.

@Ayush_Mathur are you using the EKS service or custom Kubernetes?

Well, I guess it should work for both. The essential part here is that the ES nodes were given a hostPath-mounted directory, which is essentially an EBS volume created and attached to the EC2 instances. Hence any disk operation, i.e. shrinking or extending, can be done via the cloud provider's console (AWS in our case).
We are not using EKS, so I can't provide any details about that, sorry.

Late to the party, but FWIW you can resize the volume manually outside of Kubernetes (we use Pure, for example, and resized inside the Pure FlashBlade manager). The stack will pick up the change fairly quickly without restarting containers.