Unable to scale down ECK-managed cluster

Hi,
we're running an ECK 1.7.1, Elastic 7.14.1 cluster with 3 nodes on an Azure Kubernetes cluster (v 1.26.6). The cluster is generally running fine. Now, in order to test scaling scenarios, we expanded the cluster to 6 nodes, which worked like a charm. However, if we want to scale it back down to 3 nodes, nothing happens. We've waited for more than 16 hours. The operator logs do not show any reaction to us changing the node count. The cluster state is green, "phase" says "READY". We set the log-level in the operator to 'debug', but still nothing related to scaling down can be seen.
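For reference, "scaling down" here just means lowering the nodeSet count in the Elasticsearch manifest. A minimal sketch of the change we made (the cluster and nodeSet names are illustrative, not our actual manifest):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart        # illustrative name
spec:
  version: 7.14.1
  nodeSets:
  - name: default         # illustrative nodeSet name
    count: 3              # changed back down from 6; the operator should react to this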
I also looked into the updateStrategy and podDisruptionBudget configurations and configured both to allow maximum disruption to the cluster, but the PDB resource, for example, was not even updated correctly afterwards. What I noticed there is that the operator log repeatedly mentions "no matches for kind 'PodDisruptionBudget' in version 'policy/v1beta1'". However, I cannot find "policy/v1beta1" anywhere in the CRDs we installed initially or in the PDB resource on the cluster; they all target policy/v1, which is supposedly the correct version for an AKS cluster on 1.25+.
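For completeness, this is roughly how we loosened both settings in the Elasticsearch spec (values and the cluster-name label are illustrative; a sketch, not our exact manifest):

```yaml
spec:
  updateStrategy:
    changeBudget:
      maxSurge: 3          # allow up to 3 extra pods during changes
      maxUnavailable: 3    # allow up to 3 pods down at once
  podDisruptionBudget:
    spec:
      maxUnavailable: 3    # effectively permit maximum disruption
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: quickstart  # illustrative
```

Even with these settings applied, the PDB on the cluster was not updated to match.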

So, here I am, running out of ideas as to why the operator seems to be ignoring our request to scale down. Any ideas appreciated.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.