Unable to expand persistent volumes used by Elasticsearch on Oracle Cloud

I am using block volumes on Oracle Cloud Infrastructure for persistent storage of Elasticsearch (ECK).
I wanted to increase the amount of storage of the persistent volumes, so I raised the storage value in the volume claim template, and only later learned that Oracle does not support block volume expansion.
I was also unable to decrease the claim, since volume claims can only ever be increased.
As a result, the Elasticsearch pods went into a Pending state: the storage could not be increased, and the claim could not be decreased. I have since deleted the StatefulSet and the volumes, and I want to create a new StatefulSet.

Can anyone suggest a workaround to allow persistent volume expansion on Oracle Cloud (for example, which volume types could be used)?

I am not familiar enough with Oracle Cloud to suggest alternative volume types. But it might be viable to install a different storage provisioner that leverages local NVMe disks attached to the individual nodes and supports volume expansion, for example one of the OpenEBS storage engines or a similar project.
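Whichever provisioner you end up with, note that expansion also has to be enabled on the Kubernetes side: the StorageClass must set `allowVolumeExpansion: true`, and the underlying CSI driver must actually support resizing. A minimal sketch of such a StorageClass (the provisioner name here is just a placeholder for whichever engine you install, not a recommendation):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
# placeholder provisioner; substitute the one shipped by your chosen storage engine
provisioner: example.csi.vendor.io
# without this flag, any PVC resize request is rejected by the API server
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

You would then reference it via `storageClassName` in the volumeClaimTemplates of the Elasticsearch spec.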

Putting that aside for the moment, a pure ECK-based workaround could be to create a new nodeSet whenever you need to expand your storage. To give a concrete example, consider:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.16.0
  nodeSets:
    - name: default
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 50Gi

Once the 50Gi no longer suffice, you rename the nodeSet and increase the storage (both in the same change, of course):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.16.0
  nodeSets:
    - name: expanded
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi

ECK will take care of spinning up a new StatefulSet behind the scenes with the new, bigger volumes, migrate the data automatically to the new Elasticsearch nodes, and spin down the old ones. Of course this is not as convenient as direct volume expansion, and it has the downside of additional data transfer from the old nodes to the new nodes, which, depending on your cloud provider, might incur additional cost. But I wanted to mention this option nonetheless. It is documented here, by the way: Volume claim templates | Elastic Cloud on Kubernetes [1.9] | Elastic

It worked. Thank you @pebrc Peter Brachwitz.
