Multiple elasticsearch-data Volumes

Hi everyone,

I have three Azure disks that I would like the cluster to store data on. I want a 3-node cluster, with each node attached to a separate disk. However, all three volumes get attached to the first pod, and the remaining pods are left in a Pending state. Any help would be really appreciated.

My config is as follows:

---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
  namespace: testing
spec:
  version: 7.6.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: nodes
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      path:
        data:
          - "/usr/share/elasticsearch/data"
          - "/usr/share/elasticsearch/data1"
          - "/usr/share/elasticsearch/data2"
    podTemplate:
      spec:
        volumes:
          - name: elasticsearch-data
          - name: elasticsearch-data1
          - name: elasticsearch-data2
        containers:
          - name: elasticsearch
            env:
              - name: ES_JAVA_OPTS
                value: "-Xms2g -Xmx2g"
            resources:
              limits:
                cpu: "1"
                memory: 5Gi
              requests:
                cpu: "100m"
                memory: 5Gi
            volumeMounts:
              - name: elasticsearch-data
                mountPath: /usr/share/elasticsearch/data
              - name: elasticsearch-data1
                mountPath: /usr/share/elasticsearch/data1
              - name: elasticsearch-data2
                mountPath: /usr/share/elasticsearch/data2
        initContainers:
          - name: chown-data-volumes
            command: ["sh", "-c", "chown elasticsearch:elasticsearch /usr/share/elasticsearch/data && chown elasticsearch:elasticsearch /usr/share/elasticsearch/data1 && chown elasticsearch:elasticsearch /usr/share/elasticsearch/data2"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 256Gi
        storageClassName: standard
    - metadata:
        name: elasticsearch-data1
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 256Gi
        storageClassName: standard
    - metadata:
        name: elasticsearch-data2
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 256Gi
        storageClassName: standard

You can probably inspect the existing PVCs and Pending Pods in your cluster to get more details:
kubectl get pvc -n testing
kubectl describe pvc <pvc-name> -n testing
kubectl describe pod <pending-pod-name> -n testing

My guess is that something is preventing additional PVs from being provisioned or bound.
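
A few more things worth checking (assuming the cluster runs in the testing namespace, as in your manifest): whether matching PVs exist at all, what the standard StorageClass is doing, and any recent scheduling events.
kubectl get pv
kubectl get storageclass
kubectl describe storageclass standard
kubectl get events -n testing --sort-by=.lastTimestamp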

Hi Sebastian,

Thanks for the reply. I was able to fix the issue myself. The problem was with the way I was creating the PersistentVolumes. I had to create a StorageClass with volumeBindingMode: WaitForFirstConsumer and pre-provision one PersistentVolume per Azure disk, so that each claim binds to its volume only once the pod has been scheduled. I also dropped the multiple data paths and went with a single volumeClaimTemplate, so each node gets one disk. The following config worked for me:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: eck-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
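
Since kubernetes.io/no-provisioner means there is no dynamic provisioning, the PersistentVolumes below have to be created by hand, and WaitForFirstConsumer delays binding each claim until its pod is actually scheduled. A quick way to verify the class (output columns may vary by kubectl version):
kubectl get storageclass eck-storage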

---
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: elasticsearch-data-a
spec:
  capacity:
    storage: 256Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: eck-storage
  azureDisk:
    cachingMode: ReadOnly
    diskName: <your_disk_name_a>
    diskURI: <disk_uri_a>

---
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: elasticsearch-data-b
spec:
  capacity:
    storage: 256Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: eck-storage
  azureDisk:
    cachingMode: ReadOnly
    diskName: <your_disk_name_b>
    diskURI: <disk_uri_b>

---
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: elasticsearch-data-c
spec:
  capacity:
    storage: 256Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: eck-storage
  azureDisk:
    cachingMode: ReadOnly
    diskName: <your_disk_name_c>
    diskURI: <disk_uri_c>
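
The diskURI is the full Azure resource ID of the managed disk. Assuming the Azure CLI is available, something like this should print it (the resource group name is a placeholder):
az disk show --resource-group <your_resource_group> --name <your_disk_name_a> --query id -o tsv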

---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-ha-cluster
spec:
  version: 7.6.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: nodes
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
          - name: elasticsearch
            env:
              - name: ES_JAVA_OPTS
                value: "-Xms2g -Xmx2g"
            resources:
              limits:
                cpu: "1"
                memory: 5Gi
              requests:
                cpu: "100m"
                memory: 5Gi
            volumeMounts:
              - name: elasticsearch-data
                mountPath: /usr/share/elasticsearch/data
        initContainers:
          - name: chown-data-volumes
            command: ["sh", "-c", "chown elasticsearch:elasticsearch /usr/share/elasticsearch/data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 256Gi
        storageClassName: eck-storage
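
Once this is applied, each claim binds one of the PVs and all three pods schedule. Something like the following can confirm it (the cluster-name label is the one ECK applies to the resources it creates; plain kubectl get pvc works too):
kubectl get pv
kubectl get pvc -l elasticsearch.k8s.elastic.co/cluster-name=es-ha-cluster
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=es-ha-cluster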