Elasticsearch pods stuck in Pending: "3 node(s) had volume node affinity conflict"

I am using AKS (Azure Kubernetes Service). I had an Elasticsearch cluster provisioned via ECK 1.8.0, with 3 Elasticsearch nodes spread across the 3 VMs of an AKS nodepool.

I wanted to bump up the configuration, so I provisioned a new nodepool with 3 VMs, pointed the nodeSelector in the Elasticsearch manifest at the new nodepool, and applied it.

After that, the first pod is stuck in the Pending state with this error:

0/X nodes are available: 3 node(s) had volume node affinity conflict, X node(s) didn't match Pod's node affinity/selector.

I expected it to be scheduled on one of those 3 nodes that supposedly have the "volume node affinity conflict", since that's the node affinity (nodeSelector) I've set.
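
For reference, the pending pod, its claim, and the PersistentVolume it is bound to can be inspected like this (the namespace and the pod/PV names below are placeholders, not the actual ones from my cluster):

# show the scheduling events for the pending pod
kubectl describe pod <es-pod-name> -n <namespace>

# see which PersistentVolume the elasticsearch-data claim is bound to
kubectl get pvc -n <namespace>

# the "Node Affinity" section of the PV is what the scheduler enforces
kubectl describe pv <pv-name>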

Here is the relevant part of my Elasticsearch manifest:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
...
...
      spec:
        nodeSelector:
          agentpool: newNodePool
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 128Gi
        storageClassName: managed-premium
...
...


Found the issue. Leaving the solution here in case it helps others.
My previous nodepool had Availability zones: None, but the new one was created with availability zones 1, 2 and 3 selected.
A node in one zone can't attach a PV that sits in another zone (the managed-premium storage class provisions Azure Premium_LRS managed disks behind the scenes, and the resulting PV carries a node affinity for the disk's topology), hence the error.
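
To confirm a mismatch like this, you can compare the node affinity on the PV with the zone labels on the new nodes, roughly as below (the PV name is a placeholder, and older clusters may expose the zone via the failure-domain.beta.kubernetes.io/zone label instead):

# the PV created by managed-premium carries a zone constraint in its Node Affinity
kubectl describe pv <pv-name>

# list the nodes of the new pool together with their zone labels
kubectl get nodes -L topology.kubernetes.io/zone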
The solution for me was to create the new nodepool with the same availability-zone configuration as the previous one.
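
With the Azure CLI that looks roughly like this (resource group, cluster, pool name and VM size are placeholders; omit --zones to get "Availability zones: None" like my original pool, or pass --zones 1 2 3 only if your original pool was zonal):

# create the replacement nodepool with the same availability-zone setup as the old one
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster> \
  --name <newpoolname> \
  --node-count 3 \
  --node-vm-size <vm-size>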