I am using AKS (Azure). I had an Elasticsearch cluster provisioned via ECK 1.8.0, with 3 nodes across 3 VMs in an AKS nodepool.
I wanted to bump up the configuration, so I provisioned a new nodepool with 3 VMs, updated the Elasticsearch manifest YAML to target the new nodepool, and applied it.
After that, the first pod is stuck in the Pending state with this error:

```
0/X nodes are available: 3 node(s) had volume node affinity conflict, X node(s) didn't match Pod's node affinity/selector.
```
I expected it to be scheduled on those 3 new nodes, the same ones supposedly having the "volume node affinity conflict", since that is the affinity I set.
Here is the relevant part of my Elasticsearch manifest:
```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
...
...
spec:
  nodeSelector:
    agentpool: newNodePool
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 128Gi
        storageClassName: managed-premium
...
...
```
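For context on the error: my understanding is that a `managed-premium` PersistentVolume on AKS is a zonal Azure Disk, and the provisioner stamps each PV with a `nodeAffinity` pinning it to the zone where it was created. A sketch of what I believe such a PV spec looks like (the zone value `eastus-1` here is hypothetical, and on older clusters the key may be `failure-domain.beta.kubernetes.io/zone` instead; the actual values can be checked with `kubectl get pv <name> -o yaml`):

```yaml
# Illustrative fragment of a dynamically provisioned Azure Disk PV.
# If the new nodepool's VMs are not in this zone, pods reusing the
# existing PVCs cannot be scheduled there, which would explain the
# "volume node affinity conflict" message.
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - eastus-1
```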