Did anybody manage to deploy Elasticsearch with a shared NFS volume (one volume for all pods)? All my nodes are VMs (running on bare-metal servers in a private lab). There is a dedicated NFS server, and all k8s nodes have mounts to it.
The most common error is:
ProvisioningFailed persistentvolumeclaim/elasticsearch-data-elasticsearch-candidate-es-master-1 storageclass.storage.k8s.io "standard" not found
But a volume with storageClassName "standard" does exist (kubectl get pv shows it). I managed to deploy one master node only, and it sees the NFS directory (I can see the "nodes/0" directory created by ECK), but when I add new master or data nodes, everything goes into Pending status (the pods/containers are never created, so I cannot run "kubectl logs pod_name"). I tried both NFS 3 and NFS 4.
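For context, even though "kubectl logs" fails for Pending pods (no container exists yet), the scheduling and volume-binding events are still visible via "describe". These are the commands I used to inspect the stuck resources (the pod name here is assumed from the PVC name in the error above):

```
# Pending pods have no containers, so "kubectl logs" fails,
# but the Events section of "describe" shows why scheduling stalled:
kubectl describe pod elasticsearch-candidate-es-master-1

# Same for the stuck claim; its Events explain why binding/provisioning failed:
kubectl describe pvc elasticsearch-data-elasticsearch-candidate-es-master-1

# Compare PVs and PVCs side by side (storage class, access modes, status):
kubectl get pv,pvc -o wide
```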
Volume definition example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  nfs:
    path: /mnt/elasticdata
    server: 220.127.116.11
volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
      storageClassName: standard
BTW, is it a good idea to use a shared volume for all Elasticsearch data nodes?