Elastic operator not creating Elasticsearch in OpenShift 3.11

Hi,

I'm trying to use the Elastic operator to deploy an Elasticsearch instance in an OpenShift 3.11 (Minishift) cluster, but it doesn't seem to work: the creation of the Elasticsearch resource is failing. I see the following Kubernetes events when I describe the Elasticsearch object:

Normal   AssociationStatusChange  3m16s (x4613 over 8m16s)  es-monitoring-association-controller  Association status changed from [] to []
Warning  ReconciliationError      21m (x32304 over 3h51m)   elasticsearch-controller              Failed to apply spec change: adjust resources: adjust discovery config: Operation cannot be fulfilled on elasticsearches.elasticsearch.k8s.elastic.co "elasticsearch-sample": the object has been modified; please apply your changes to the latest version and try again
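
For reference, these events come from describing the custom resource, e.g. (testing-efk is the namespace I deployed into):

kubectl describe elasticsearch elasticsearch-sample -n testing-efk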

I applied the following YAML file to deploy Elasticsearch:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.14.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
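
I applied it with a plain kubectl apply; the file name below is just what I called it locally:

kubectl apply -f elasticsearch.yaml -n testing-efk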

The operator logs contain nothing special as far as I can see, just a lot of "starting reconciliation" and "ending reconciliation" messages.

Each time I describe the Elasticsearch resource, its resourceVersion has increased. It almost seems like the operator is constantly incrementing the resource version, and that this is blocking any further deployment progress.
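
A quick way to see this, as a rough sketch:

# print the resourceVersion once per second; it keeps climbing with no spec changes
while true; do
  kubectl get elasticsearch elasticsearch-sample -n testing-efk \
    -o jsonpath='{.metadata.resourceVersion}{"\n"}'
  sleep 1
done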

I'm using operator version 1.8.0 (also tried 1.7.0, same result) and trying to deploy Elasticsearch 7.15.1 (also tried 7.14.2, same result). I used the legacy manifests, since OpenShift 3.11 uses Kubernetes 1.11 (https://download.elastic.co/downloads/eck/1.8.0/operator-legacy.yaml).
I'm configuring everything with the system:admin account.
Deployment of the operator itself worked without any problems.
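
For completeness, I installed the operator roughly like this (the crds-legacy.yaml step is from memory of the ECK docs for pre-1.16 clusters, so treat it as an assumption):

kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/crds-legacy.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator-legacy.yaml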

Any ideas what could be wrong here?

Thanks!

Hey @Jasper9041, thanks for your question.

Can you double-check that you have only a single operator Pod running? "the object has been modified" errors and a constantly increasing resourceVersion can be a symptom of multiple ECK operators trying to orchestrate a single resource. If that's not the case, please run the ECK diagnostics tool and provide its output to allow for further investigation.
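
Something like this should list every operator Pod across namespaces (assuming the control-plane=elastic-operator label from the default manifests):

kubectl get pods --all-namespaces -l control-plane=elastic-operator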

Thanks,
David

Hey @dkow, thanks for your response.

I don't think there is more than one operator installed.
I double-checked all namespaces and I can only find one operator (see below).

All pods (kubectl get pods --all-namespaces):
NAMESPACE                       NAME                                                      READY   STATUS       RESTARTS   AGE
argocd                          argocd-application-controller-0                           1/1     Running      5          1d
argocd                          argocd-dex-server-7854c9b469-8rxnf                        1/1     Running      1          1d
argocd                          argocd-repo-server-5564d867c6-7k29h                       1/1     Running      1          1d
argocd                          argocd-server-785d9c998c-nbnb7                            1/1     Running      6          1d
default                         docker-registry-1-cwffq                                   1/1     Running      3          5d
default                         persistent-volume-setup-gvprw                             0/1     Completed    0          5d
default                         router-1-vvpdb                                            1/1     Running      3          5d
elastic-system                  elastic-operator-0                                        1/1     Running      3          4d
guestbook-test                  kustomize-guestbook-ui-6b64bf9c76-wdbl5                   1/1     Running      1          1d
kube-dns                        kube-dns-pcxfb                                            1/1     Running      3          5d
kube-proxy                      kube-proxy-cmsqp                                          1/1     Running      3          5d
kube-system                     kube-controller-manager-localhost                         1/1     Running      4          5d
kube-system                     kube-scheduler-localhost                                  1/1     Running      4          5d
kube-system                     master-api-localhost                                      1/1     Running      4          5d
kube-system                     master-etcd-localhost                                     1/1     Running      3          5d
openshift-apiserver             openshift-apiserver-sjnqq                                 1/1     Running      5          5d
openshift-controller-manager    openshift-controller-manager-mtv52                        1/1     Running      4          5d
openshift-core-operators        openshift-service-cert-signer-operator-6d477f986b-2kg9t   1/1     Running      4          5d
openshift-core-operators        openshift-web-console-operator-57986c9c4f-svl8b           1/1     Running      4          5d
openshift-service-cert-signer   apiservice-cabundle-injector-8ffbbb6dc-c7cgn              1/1     Running      3          5d
openshift-service-cert-signer   service-serving-cert-signer-668c45d5f-mxflq               1/1     Running      3          5d
openshift-web-console           webconsole-56d96bbcf4-n64mg                               1/1     Running      4          5d
testing-efk                     elasticsearch-sample-es-default-0                         0/1     Init:Error   44         4d
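
The Elasticsearch Pod itself is stuck in Init:Error; for what it's worth, this is how I'd pull its init container logs (the container name elastic-internal-init-filesystem is my assumption based on ECK defaults):

kubectl logs elasticsearch-sample-es-default-0 -n testing-efk -c elastic-internal-init-filesystem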

I also ran the eck-diagnostics tool. I uploaded the results here: eck-diagnostic-2021-10-20T11-44-49.zip - Google Drive
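
I invoked the tool roughly like this; the -o/-r namespace flags are my best recollection of its usage, so treat them as an assumption:

eck-diagnostics -o elastic-system -r testing-efk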

Thanks,
Jasper

Hi Jasper,

There is a bug that affects ECK 1.7/1.8 on Minishift (OpenShift 3.11), which will be fixed in the next version.

Sorry for the inconvenience.
