Volume expansion

I was deploying an Elasticsearch cluster as a StatefulSet and upgraded elastic-operator to 1.3.0. I can see that volume expansion is supported in the 1.3.0 release.

But I am still seeing the old storage size. Is volume expansion supported in the elastic-operator 1.3.0 release?

Yes, it is supported starting with 1.3.0.

As indicated by the docs, it only works if your storage provisioner itself supports volume expansion. The storage class also needs allowVolumeExpansion: true.
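As a sketch, a storage class that permits expansion looks like the following (the provisioner and parameters here are illustrative; substitute the ones your cluster actually uses):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
# the provisioner must itself support volume expansion
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# required before PVCs using this class can be resized
allowVolumeExpansion: true
```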

If it does not work for you, can you please provide your Elasticsearch resource YAML manifest, along with the manifest of the storage class you are using and the YAML manifests of the PVC and PV resources?

Below is the custom resource file we are using:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.9.0
  nodeSets:
  - name: master
    count: 1
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
      node.master: true
      node.data: false
      node.ingest: false
      node.ml: false
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 5Gi
              cpu: 2
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
  - name: data-ingest  
    count: 1
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.ml: true
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 5Gi
              cpu: 2
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    # request 3Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
        storageClassName: standard

We are not getting an error, but it still shows the old storage value.
When we try to set allowVolumeExpansion: true in the Elasticsearch CR, we get an error saying that field is unknown.
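That field-unknown error is expected: allowVolumeExpansion is a field of the StorageClass resource, not of the Elasticsearch custom resource, so the operator's CRD rejects it. Assuming your class is the one named standard, it can be enabled in place with something like:

```sh
# allowVolumeExpansion is a top-level StorageClass field,
# so a simple merge patch is enough
kubectl patch storageclass standard \
  -p '{"allowVolumeExpansion": true}'
```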

@knagasri can you show:

  • your storage class named standard? (kubectl get storageclass standard -o yaml)
  • a PersistentVolumeClaim generated for one of the data nodes? (kubectl get pvc elasticsearch-data-elasticsearch-sample-es-data-ingest-0 -o yaml)
  • the PersistentVolume matching that PVC? (kubectl get pv <volumeName-specified-in-the-pvc-spec> -o yaml)

kubectl get storageclass standard -o yaml

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"standard"},"parameters":{"fstype":"ext4","replication-type":"none","type":"pd-standard"},"provisioner":"kubernetes.io/gce-pd","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2020-06-10T21:45:02Z"
  name: standard
  resourceVersion: "1343166456"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/standard
  uid: 1d7640a1-8180-4ba3-9fbf-823550e6b7c9
parameters:
  fstype: ext4
  replication-type: none
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

kubectl get pvc elasticsearch-data-elasticsearch-sample-es-data-ingest-0 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
    volume.kubernetes.io/selected-node: gke-gtso-enter-gke-g-qa-testing-node--eb5540ea-kjmh
  creationTimestamp: "2020-12-09T12:19:44Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch-config
    elasticsearch.k8s.elastic.co/statefulset-name: elasticsearch-config-es-data-ingest
  name: elasticsearch-data-elasticsearch-config-es-data-ingest-0
  namespace: es-upgrade1
  ownerReferences:
  - apiVersion: elasticsearch.k8s.elastic.co/v1
    blockOwnerDeletion: false
    controller: true
    kind: Elasticsearch
    name: elasticsearch-config
    uid: ba193be3-cbd4-491a-8091-5e67ec596527
  resourceVersion: "1813337381"
  selfLink: /api/v1/namespaces/es-upgrade1/persistentvolumeclaims/elasticsearch-data-elasticsearch-config-es-data-ingest-0
  uid: 653e71ac-e2a0-4d70-ab47-b2474544998e
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-653e71ac-e2a0-4d70-ab47-b2474544998e
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  phase: Bound

Am I reading correctly that your volume is already 100Gi?
In that case a resize to 3Gi as requested in your Elasticsearch spec would not work (you can only increase the size).
Or does the PVC manifest you're showing correspond to a different Elasticsearch manifest? This 3Gi/100Gi mismatch seems strange.

Yes, I pasted a different one. I deployed a new Elasticsearch cluster with a new storage value, but even then I am not able to increase the size. Is this because of the StatefulSet?

@knagasri can you please paste an Elasticsearch manifest, along with a pvc manifest where that happens?
Also: which Kubernetes version are you running?

Below is my Elasticsearch manifest; the storage section is defined in this YAML as well.

# This sample sets up an Elasticsearch cluster with 2 master and 3 data-ingest nodes.
elasticsearchYaml: |-
  apiVersion: elasticsearch.k8s.elastic.co/v1
  kind: Elasticsearch
  metadata:
    name: elasticsearch-config
  spec:
    version: 7.9.0
    nodeSets:
    - name: master
      count: 2
      config:
        node.master: true
        node.data: false
        node.ingest: false
        node.ml: true
        node.store.allow_mmap: false
        # for release 7.4.0 and 7.6.0 uncomment following line for filerealm config
        xpack.security.authc.realms.file.file1.order: 0
        # for release 6.8.8 uncomment following line for filerealm config
        #xpack.security.authc.realms.file1.type: file
        #xpack.security.authc.realms.file1.order: 0
        http.compression: true
        http.compression_level: 9
      podTemplate:
        metadata:
          labels:
            # additional labels for pods
            master: node
        spec:
          # This init container can be used to install any plugin in elasticsearch
          initContainers:
          - name: install-plugins
            command:
            - sh
            - -c
            - |
              bin/elasticsearch-plugin install --batch repository-gcs
          containers:
          - name: elasticsearch
            # specify resource limits and requests
            resources:
              limits:
                memory: 6Gi
                cpu: 2
            env:
            - name: ES_JAVA_OPTS
              value: "-Xms4g -Xmx4g"   
      volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 400Gi
          storageClassName: standard   
    - name: data-ingest
      count: 3
      config:
        node.master: false
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.enabled: true
        # for release 7.4.0 and 7.6.0
        xpack.security.authc.realms.file.file1.order: 0
        # for release 6.8.8
        #xpack.security.authc.realms.file1.type: file
        #xpack.security.authc.realms.file1.order: 0 
        http.compression: true
        http.compression_level: 9
      podTemplate:
        metadata:
          labels:
            # additional labels for pods
            data: node
        spec:
          initContainers:
          - name: install-plugins
            command:
            - sh
            - -c
            - |
              bin/elasticsearch-plugin install --batch repository-gcs
          containers:
          - name: elasticsearch
            # specify resource limits and requests
            resources:
              limits:
                memory: 6Gi
                cpu: 2
            env:
            - name: ES_JAVA_OPTS
              value: "-Xms4g -Xmx4g"
      volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi  # <-- the storage size
          storageClassName: standard
     

Kubernetes master version: 1.16.13

Sorry to reiterate my question: can you please also paste the PVC resource that matches that Elasticsearch manifest? (kubectl get pvc elasticsearch-data-elasticsearch-sample-es-data-ingest-0 -o yaml )

kubectl -n es-upgrade2 get pvc elasticsearch-data-elasticsearch-config-es-data-ingest-0 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
    volume.kubernetes.io/selected-node: gke-gtso-enter-gke-g-qa-testing-node--e12e2f7f-c4bf
    volume.kubernetes.io/storage-resizer: kubernetes.io/gce-pd
  creationTimestamp: "2020-12-14T13:44:32Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch-config
    elasticsearch.k8s.elastic.co/statefulset-name: elasticsearch-config-es-data-ingest
  name: elasticsearch-data-elasticsearch-config-es-data-ingest-0
  namespace: es-upgrade2
  ownerReferences:
  - apiVersion: elasticsearch.k8s.elastic.co/v1
    blockOwnerDeletion: false
    controller: true
    kind: Elasticsearch
    name: elasticsearch-config
    uid: e2c909bf-caa7-426e-a2ba-5ac9099e7c8d
  resourceVersion: "1892698049"
  selfLink: /api/v1/namespaces/es-upgrade2/persistentvolumeclaims/elasticsearch-data-elasticsearch-config-es-data-ingest-0
  uid: a274d50e-c2dd-487d-bde7-2285b5c5b4ff
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-a274d50e-c2dd-487d-bde7-2285b5c5b4ff
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  phase: Bound

@knagasri the PVC says your volume is 100Gi. The Elasticsearch manifest requests a volume of 100Gi.
Are those the manifests where you tried to resize the volume?

I am getting an error in the operator pod like the one below:

{"log.level":"error","@timestamp":"2020-12-17T11:10:27.359Z","log.logger":"controller-runtime.controller","message":"Reconciler error","service.version":"1.1.0-29e7447f","service.type":"eck","ecs.version":"1.4.0","controller":"elasticsearch-controller","request":"es-upgrade1/elasticsearch-config","error":"StatefulSet.apps \"elasticsearch-config-es-data-ingest\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden","errorCauses":[{"error":"StatefulSet.apps \"elasticsearch-config-es-data-ingest\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden"}],"error.stack_trace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:88"}
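The forbidden update is the core of the problem: volumeClaimTemplates is one of the StatefulSet spec fields that Kubernetes does not allow to be changed in place. Note also that the log line above reports service.version 1.1.0-29e7447f, which suggests the operator actually running is still 1.1.0; ECK 1.3.0 works around this immutability by recreating the StatefulSet while keeping the pods. As an interim workaround (assuming the storage class allows expansion), the PVC itself can be patched directly; the 200Gi value below is only an example:

```sh
# grow the existing claim directly; PVC resizing bypasses the
# immutable StatefulSet volumeClaimTemplates
kubectl -n es-upgrade1 patch pvc elasticsearch-data-elasticsearch-config-es-data-ingest-0 \
  --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
```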

Does anyone know whether ECK 1.3.0 supports volume expansion for StatefulSet deployments?

@sebgl, yes, those are the manifests with which I tried to resize the volume. Can you help me with this?

@knagasri I'm sorry but I think there is a mismatch in the manifests you provided.
You request 100Gi and you get 100Gi so I don't see a problem there.

What I would like to see is:

  • an Elasticsearch manifest where size Y is requested (resized from the initial size X)
  • a corresponding PersistentVolumeClaim where you don't get size Y, but still size X
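For reference, the two sizes can be compared with something like the following (resource names and namespace are taken from the manifests above; adjust as needed):

```sh
# size requested in the Elasticsearch spec for the data-ingest node set
kubectl -n es-upgrade2 get elasticsearch elasticsearch-config \
  -o jsonpath='{.spec.nodeSets[?(@.name=="data-ingest")].volumeClaimTemplates[0].spec.resources.requests.storage}'

# actual size of the bound PVC
kubectl -n es-upgrade2 get pvc elasticsearch-data-elasticsearch-config-es-data-ingest-0 \
  -o jsonpath='{.status.capacity.storage}'
```

If the first command prints Y and the second still prints X, that is the mismatch worth reporting.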

Actually, I provided the correct manifest. In the operator logs I was getting an error about permission to change the PVC. It is now sorted out. Thanks.