ECK with NFS shared persistent volume

Did somebody manage to deploy Elasticsearch with a shared NFS volume (one volume for all pods)? All my nodes are VMs (on top of private-lab bare-metal servers). There is a dedicated NFS server, and all k8s nodes have mounts to it.
The most common error is:

```
ProvisioningFailed persistentvolumeclaim/elasticsearch-data-elasticsearch-candidate-es-master-1 "standard" not found
```

But the volume with storage class `standard` is there (`kubectl get pv` shows it). I managed to deploy only one master node, and it sees the NFS directory (I see the "nodes/0" directory created by ECK), but when I add a new master or data node, everything stays in Pending status (the pods/containers are not created, so I cannot run `kubectl logs pod_name`). I tried NFS 3 and 4.
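For what it's worth, the `"standard" not found` error refers to a StorageClass object, not to the PV itself: if no StorageClass named `standard` exists, the claim cannot bind even though a matching PV is present. A minimal sketch of such a StorageClass, assuming you statically pre-create PVs (the `kubernetes.io/no-provisioner` value is the conventional marker for "no dynamic provisioning"):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
# No dynamic provisioner; claims bind only to pre-created PVs
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

You can check whether the class exists with `kubectl get storageclass`.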

volume definition example:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: standard
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  nfs:
    path: /mnt/elasticdata
```

claim example:

```yaml
- metadata:
    name: elasticsearch-data
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 2Gi
    storageClassName: standard
```

BTW, is it a good idea to use a shared volume for all Elasticsearch data nodes?

I've never used NFS with Kubernetes before, so hopefully someone else can chime in. That said, this isn't necessarily ECK-specific, so you may be able to find resources elsewhere from people using NFS with StatefulSets if no one here can help you out.

Thanks @Anya_Sabo for your response. My other workloads work well with NFS, so I assume it's just not supported in ECK yet. Could you please recommend what kind of storage (other than public cloud services) to use with ECK? In terms of performance and persistence, would emptyDir work (is it replicated if I use more than one data node), or does the data go away with the pods?
Thanks again

@SergeyK we generally recommend not using NFS for Elasticsearch data, mostly because of poor performance.

EmptyDir works, but we also don't recommend it. Every time a Pod gets deleted or recreated (e.g. a Pod upgrade, or its host going down), you will lose that Pod's data. That is fine in some situations, since Elasticsearch replication will kick in and restore the missing data, but it is still risky if you lose several Pods at once.

I would encourage you to look at local PersistentVolumes.
Some options:
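To illustrate what a local PersistentVolume looks like, here is a minimal sketch; the path, node name, and storage class name below are placeholders. Local volumes require a `nodeAffinity` section pinning the PV to the node that owns the disk, and are typically paired with a `WaitForFirstConsumer` StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-node-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # placeholder class, no-provisioner style
  local:
    path: /mnt/disks/es-data        # placeholder: a disk mounted on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node-1        # placeholder: the node that owns the disk
```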