Azure Kubernetes Service on Azure Stack HCI

Hi,
I'm trying to create a persistent volume with Kubernetes, but I get the following error message:
0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.

This is AKS (Azure Kubernetes Service) on Azure Stack HCI, so an on-premises solution.
I have a two-node failover cluster, and I'm trying to set this up with local disks (SSD/HDD).

My setup

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hot-pv-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: D:/eshot/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - moc-xxxx
          - moc-xxxx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hot-pv-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 200Gi
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: esdeployment01t
spec:
  version: 8.3.0
  nodeSets:
  - name: masters
    count: 3
    config:
      node.roles: ["master"]
      xpack.ml.enabled: true
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch        # resources must sit under a named container entry
          resources:
            requests:
              memory: 2Gi
              cpu: 0.5
            limits:
              memory: 2Gi
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 2Gi
          storageClassName: my-local-storage

PersistentVolumeClaims are created automatically by the StatefulSet controller. Using your example, you should have the following ones created, all in a Pending state:

NAME                                              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-esdeployment01t-es-masters-0   Pending                                                                        standard       3s
elasticsearch-data-esdeployment01t-es-masters-1   Pending                                                                        standard       3s
elasticsearch-data-esdeployment01t-es-masters-2   Pending                                                                        standard       3s

Either matching PersistentVolumes already exist, or they are provisioned dynamically. Given that local storage does not support dynamic provisioning (see the Kubernetes documentation on local volumes), you should create them in advance. Could you check that there are actually three matching local PersistentVolumes?
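
For example, something along these lines, with one PersistentVolume per Elasticsearch pod (the node names and paths below are placeholders for your environment; note also that local volumes only support the ReadWriteOnce access mode):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-0
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce                  # local volumes do not support ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disks/vol0          # placeholder; the directory must exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-0-name>          # pin each PV to the single node that owns the disk
---
# Repeat as local-pv-1 and local-pv-2, each with its own path and node.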

Hi Michael,

No, I don't have three persistent volumes, only one. The other two claims are in a Pending state.

To be able to use local storage, do I need to create three PersistentVolumes manually and bind them to my pods? How can I create three identical PVs and assign them to my pods, and will the Elasticsearch nodes still be able to replicate/communicate with each other?

Thank you!

My configuration

Persistent volume claims

NAME                                             STATUS    VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
elasticsearch-data-esdeployment01t-es-master-0   Bound     test-local-pv   50Gi       RWO            local-storage   3m1s
elasticsearch-data-esdeployment01t-es-master-1   Pending                                             local-storage   3m1s
elasticsearch-data-esdeployment01t-es-master-2   Pending                                             local-storage   3m1s

Pods


NAME                          READY   STATUS    RESTARTS   AGE
esdeployment01t-es-master-0   1/1     Running   0          8m56s
esdeployment01t-es-master-1   0/1     Pending   0          8m56s
esdeployment01t-es-master-2   0/1     Pending   0          8m55s

Persistent Volume

NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                    STORAGECLASS    REASON   AGE
test-local-pv   50Gi       RWO            Retain           Bound    default/elasticsearch-data-esdeployment01t-es-master-0   local-storage            26m

Part 2 EDIT
Hi,
I believe I found the solution, but I would like Elastic's opinion on it before implementing this in prod.

AKS on Azure Stack HCI has a CSI driver plugin which you can use to create a Kubernetes DataDisk resource. These disks are mounted as ReadWriteOnce, which I guess is how Elasticsearch should work, since each ES instance runs in its own pod, right? Each pod reads/writes to its own .vhdx disk/file.

Link:
azure-stack-docs/container-storage-interface-disks.md at main · MicrosoftDocs/azure-stack-docs · GitHub
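
If this is the right direction, I assume the setup would look roughly like the sketch below (the StorageClass name is my own, and the provisioner name is taken from the linked doc; it should match the built-in default StorageClass that ships with AKS on Azure Stack HCI, so please verify with kubectl get storageclass):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aks-hci-disk               # made-up name
provisioner: disk.csi.akshci.com   # disk CSI driver per the linked doc
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

and the volumeClaimTemplates in the Elasticsearch spec changed accordingly:

volumeClaimTemplates:
- metadata:
    name: elasticsearch-data
  spec:
    accessModes:
    - ReadWriteOnce                # one dynamically provisioned .vhdx per pod
    resources:
      requests:
        storage: 50Gi
    storageClassName: aks-hci-disk

With dynamic provisioning each pod in the StatefulSet would get its own disk, so no pre-created PersistentVolumes would be needed.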

We are using Storage Spaces Direct (S2D) and will implement three-way mirroring, i.e. three nodes, each holding its own copy of the data. Will this setup work, or should we use another solution (NFS/SMB)?

Thanks.
