ECK, OpenShift 4.7, volumeClaimDeletePolicy, managed resources

Hi! I'm running a private ECK deployment (operator 2.0.0, Elasticsearch 8.0.0) on OpenShift 4.7 and have hit two problems.

  1. When I explicitly set volumeClaimDeletePolicy in the spec, "oc apply -f" returns an error. I get the same error when deploying with Argo CD. However, if I create the same YAML through the web console, the create succeeds. (A sketch of the workarounds I plan to try is below the test manifest.)
oc apply -f es.yaml
The Elasticsearch "elasticsearch" is invalid: volumeClaimDeletePolicy: Invalid value: "volumeClaimDeletePolicy": volumeClaimDeletePolicy field found in the kubectl.kubernetes.io/last-applied-configuration annotation is unknown. This is often due to incorrect indentation in the manifest.

Test manifest:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: epaas-eck
spec:
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  version: 8.0.0
  nodeSets:
  - config:
      node.roles: ["master","data"]
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: sas-rwo
    name: master
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: '4'
              memory: 2Gi
            requests:
              cpu: '1'
              memory: 2Gi
        nodeSelector:
          node-role.kubernetes.io/infra-elastic: ''
        tolerations:
        - effect: NoSchedule
          key: infra-elastic
          value: reserved
        - effect: NoExecute
          key: infra-elastic
          value: reserved
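
For reference, here is a sketch of the workarounds I intend to try. I haven't verified them in this environment yet, and the webhook name and elastic-system namespace are the defaults of a standard ECK install, so they may differ:

# Check which operator image actually serves the admission webhook, since an
# older webhook would not know the volumeClaimDeletePolicy field (added in 2.0)
oc get pods -n elastic-system -o jsonpath='{.items[*].spec.containers[*].image}'
oc get validatingwebhookconfiguration elastic-webhook.k8s.elastic.co -o yaml

# Server-side apply does not write the kubectl.kubernetes.io/last-applied-configuration
# annotation that the error message complains about
oc apply --server-side -f es.yaml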
  2. I also had trouble upgrading a production ECK cluster from 1.9.1 to 2.0.0 on another OpenShift cluster. I started by mirroring all the operator, Elasticsearch, and Kibana images to our local registry, then upgraded the operator (and its CRDs) to 2.0.0. Next I changed the version in the Elasticsearch and Kibana resources to 8.0.0 and pointed them at the new images. I expected the new images to be rolled out via a rolling update, but nothing happened: the image in the StatefulSets did not change, even though the StatefulSets are managed by the operator. I then changed the image in the StatefulSets manually and restarted the Elasticsearch and Kibana pods. The cluster was upgraded, but the Elasticsearch resource still reports the old version (7.16.1), and I don't know how to bring it up to date. (What I expected to be enough is sketched below the screenshot.)
    Why isn't the StatefulSet handled as a managed resource of the Elasticsearch custom resource, and why doesn't the operator drive the cluster update?

    [Screenshot from 2022-02-28 15-26-24]
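
My understanding (which may be wrong) is that the operator is supposed to drive the upgrade itself once spec.version is bumped on the custom resources, rather than me editing the StatefulSets. Something like this is what I expected to be enough; the resource names and namespace are from my test setup and the Kibana resource name is assumed:

# Bump the version on the operator-managed resources and let the operator roll
# the StatefulSets; editing the StatefulSets directly bypasses the operator
oc patch elasticsearch elasticsearch -n epaas-eck --type merge -p '{"spec":{"version":"8.0.0"}}'
oc patch kibana kibana -n epaas-eck --type merge -p '{"spec":{"version":"8.0.0"}}'

# Watch the operator-driven rollout
oc get elasticsearch -n epaas-eck -w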

In the end I had to delete the Elasticsearch resource and create a new one, because adding "volumeClaimDeletePolicy" to the existing resource failed on the update path, probably because of the mismatch between the version in the resource (7.16.1) and the real cluster version (8.0.0). However, I still can't create a new Elasticsearch resource with "volumeClaimDeletePolicy" through Argo CD (same error as before), so I had to add "volumeClaimDeletePolicy" manually.
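
To double-check whether the StatefulSets are really operator-managed and which version the operator believes the pods are running, I plan to look at the ECK labels and the operator logs. The label names below are the standard ECK ones; the operator location assumes a default install in elastic-system:

# StatefulSets and pods owned by this Elasticsearch cluster, with the version label shown
oc get statefulsets -n epaas-eck -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch
oc get pods -n epaas-eck -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch -L elasticsearch.k8s.elastic.co/version

# Operator logs usually explain why a rolling upgrade is or is not happening
oc logs -n elastic-system sts/elastic-operator --tail=100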
