Hi,
I'm trying to create a persistent volume with Kubernetes, but I get the following error message:
0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.
This is AKS (Azure Kubernetes Service) on Azure Stack HCI, so it is an on-premises solution.
I have a 2-node failover cluster, and I'm trying to set this up with local disks (SSD/HDD).
PersistentVolumeClaims are created automatically by the StatefulSet controller. Using your example, you should have the following ones created, in a Pending state:
NAME                                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-esdeployment01t-es-masters-0   Pending                                      standard       3s
elasticsearch-data-esdeployment01t-es-masters-1   Pending                                      standard       3s
elasticsearch-data-esdeployment01t-es-masters-2   Pending                                      standard       3s
Either matching PersistentVolumes already exist, or they are provisioned dynamically. Given that local storage does not support dynamic provisioning (see the Kubernetes docs here), you need to create them in advance. Could you check whether there are actually 3 matching local PersistentVolumes?
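For reference, a pre-created local PersistentVolume could look roughly like the sketch below. The StorageClass name matches the claims above; the disk path and node name are placeholder assumptions you would adjust per node, and one such PV (with a unique name) is needed for each replica, on a distinct node:

```yaml
# StorageClass for local volumes: no dynamic provisioner, binding deferred
# until a consuming pod is scheduled
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One local PV per StatefulSet replica; repeat per node with a unique name
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node1              # placeholder name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  local:
    path: /mnt/disks/ssd1           # placeholder path on the node
  nodeAffinity:                     # required for local volumes: pins the PV to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1            # placeholder node name
```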
No, I don't have three persistent volumes, only one. The other claims are in a Pending state.
To be able to use local storage, do I need to create 3 persistent volumes and bind them to my pods manually? How can I create 3 identical PVs, how do I assign them to my pods, and will the Elasticsearch nodes still be able to replicate/communicate with each other?
Thank you!
My configuration
Persistent volume claim
NAME                                             STATUS    VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
elasticsearch-data-esdeployment01t-es-master-0   Bound     test-local-pv   50Gi       RWO            local-storage   3m1s
elasticsearch-data-esdeployment01t-es-master-1   Pending                                             local-storage   3m1s
elasticsearch-data-esdeployment01t-es-master-2   Pending                                             local-storage   3m1s
Pods
NAME                          READY   STATUS    RESTARTS   AGE
esdeployment01t-es-master-0   1/1     Running   0          8m56s
esdeployment01t-es-master-1   0/1     Pending   0          8m56s
esdeployment01t-es-master-2   0/1     Pending   0          8m55s
Persistent Volume
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                    STORAGECLASS    REASON   AGE
test-local-pv   50Gi       RWO            Retain           Bound    default/elasticsearch-data-esdeployment01t-es-master-0   local-storage            26m
Part 2 EDIT
Hi,
I believe I found the solution, but I would like to have Elastic's opinion on it before implementing this in prod.
AKS on Azure Stack HCI has a CSI driver plugin which you can use to create a Kubernetes DataDisk resource. These are mounted as ReadWriteOnce; I guess this is the way Elasticsearch should function, since each ES instance runs in its own pod, right? Each pod reads/writes to its own .vhdx disk/file.
We are using Storage Spaces Direct (S2D) and will implement three-way mirroring, i.e. 3 nodes, each holding its own copy of the data. Will this setup work, or should we use another solution (NFS/SMB)?
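As a sketch of that approach, a StorageClass backed by the AKS on Azure Stack HCI disk CSI driver might look like the following. The provisioner name and class name here are assumptions to verify against the storage classes pre-installed in your cluster (`kubectl get storageclass`):

```yaml
# Hypothetical StorageClass for the AKS on Azure Stack HCI disk CSI driver;
# verify the provisioner name against what `kubectl get storageclass` reports
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aks-hci-disk                # placeholder name
provisioner: disk.csi.akshci.com    # assumed driver name, check your cluster
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```

With dynamic provisioning like this, the StatefulSet's volume claim template would simply reference the class, and a .vhdx-backed disk would be created per replica instead of pre-creating local PVs by hand.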