ECK fails to update the status field of the Elasticsearch cluster on EKS 1.19

Hi, the operator was installed by rendering the Helm chart to manifests and then applying them via ArgoCD. This was a first install, but I plan to upgrade it by re-rendering the same way and letting Argo apply the result (is that OK?)

    helm template elastic elastic/eck-operator \
        --version 1.6.0 --kube-version v1.19.0 --dry-run --include-crds \
        --namespace elastic-system \
        --set=installCRDs=true \
        --set=webhook.enabled=true \
        --set=config.logVerbosity="0" \
        --set=config.metricsPort="0" \
        --set=config.caValidity="87600h" \
        --set=config.caRotateBefore="240h" \
        --set=config.certificatesRotateBefore="240h" \
        --set=config.kubeClientTimeout="60s" \
        --set=config.elasticsearchClientTimeout="180s" \
        --set=podMonitor.enabled=false \
        --set=global.createOperatorNamespace=false \
        --set=global.kubeVersion=1.19.0
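As a sanity check for this render-then-sync workflow, the same grep used below can be run against the rendered manifests before committing them, so ArgoCD never syncs a CRD that is missing its status subresource. A minimal sketch; the file name is hypothetical and the snippet simulates a rendered CRD exhibiting the problem, standing in for the real chart output:

```shell
# Simulate a rendered Elasticsearch CRD that is missing the subresources
# block (the symptom described below), then run the check against it.
cat > rendered-elasticsearch-crd.yaml <<'EOF'
  names:
    kind: Elasticsearch
    singular: elasticsearch
  scope: Namespaced
  validation:
    openAPIV3Schema:
      type: object
EOF

# grep exits non-zero when the block is absent, which makes this easy to
# wire into a pre-commit or CI step.
if ! grep -q 'subresources:' rendered-elasticsearch-crd.yaml; then
  echo "status subresource missing; do not apply"
fi
```

The same check run against a healthy render would simply produce no warning.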

Running the following check against the live CRD gives me no output:

    kubectl get crds elasticsearches.elasticsearch.k8s.elastic.co -o yaml | grep -n -A 2 -B 2 'subresources:'

Checking the upstream all-in-one manifest, I can see I should be getting:

$ cat CustomResourceDefinition-elasticsearches.elasticsearch.k8s.elastic.co.yaml | grep -n -A 2 -B 2 'subresources:'
42-    singular: elasticsearch
43-  scope: Namespaced
44:  subresources:
45-    status: {}
46-  validation:
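One thing worth noting (an assumption on my part, since I have not diffed the two manifests): the all-in-one excerpt above declares `subresources:` at the top level of `spec`, next to `validation:`, which is the `apiextensions.k8s.io/v1beta1` layout, while newer charts may render `apiextensions.k8s.io/v1` CRDs where the subresource is declared per entry under `spec.versions`. The grep matches either layout when the field is present, as this illustrative v1-style snippet (not the actual chart output) shows:

```shell
# Illustration of the assumed v1 layout: the status subresource sits under
# each version in spec.versions, so the same grep still finds it if present.
cat > crd-v1-snippet.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}
EOF

grep -n -A 1 'subresources:' crd-v1-snippet.yaml
```

So a grep with zero matches really does mean the subresource is absent, whichever apiVersion the rendered CRD uses.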

but when I check the manifests I rendered from Helm, the subresources block is not there (so it is missing not just in the live cluster but also in my checked-in manifests).

I am going to look into this to figure out what happened.

thanks!