Authentication to realm default_file failed

Hi! I have an Elasticsearch cluster deployed with ECK. The pods are logging:

{"level": "WARN", "component": "o.e.x.s.a.AuthenticationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-es-data-6ff4cgt49v", "message": "Authentication to realm default_file failed - Password authentication failed for elastic", "cluster.uuid": "ixv-beimTw-glVn8x3-TEA", "node.id": "guIZqSwRSUm7zU1ZHCu8iw" }

I don't understand the error. The cluster is working (3 master, 3 ingest, 3 data nodes).

I also have another question: is it possible to preserve the data volumes of the data nodes? I ran into trouble and lost all my data. Luckily I had a snapshot of the most important index.

Hey @pabloborh,

Could you share what your Elasticsearch & Kibana yaml manifests look like?
Have you configured the native realm in a particular way?
Which version of ECK are you running? Note there is a bug in version 1.0.0-beta1 that prevents the native realm from being used correctly, see [this workaround].
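If the warning is caused by a client still using stale credentials, a quick sanity check is to fetch the `elastic` user's password and test it directly. A minimal sketch, assuming the usual ECK naming convention (`<cluster-name>-es-elastic-user` secret, `<cluster-name>-es-http` service) and the `elasticsearch` namespace from your manifest:

```shell
# Retrieve the auto-generated password for the built-in elastic user
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user \
  -n elasticsearch -o go-template='{{.data.elastic | base64decode}}')

# Forward the HTTP service locally and test the credentials
# (-k skips verification of the self-signed certificate)
kubectl port-forward -n elasticsearch service/elasticsearch-es-http 9200 &
curl -sk -u "elastic:${PASSWORD}" https://localhost:9200/
```

If this succeeds while the warnings keep appearing, some other client (for example an old Kibana instance or a monitoring agent) is likely authenticating with an outdated password.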

I also have another question: is it possible to preserve the data volumes of the data nodes? I ran into trouble and lost all my data. Luckily I had a snapshot of the most important index.

Are you talking about reusing data volumes while applying some changes on the cluster? If you are using version 1.0.0-beta1: persistent volumes are reused if you apply a modification to an existing nodeSet.

Or maybe you are talking about keeping data volumes around while the cluster is deleted? ECK automatically deletes PersistentVolumeClaims, but the corresponding PersistentVolumes are kept around if your storage class allows it (see [reclaim policy](Storage Classes | Kubernetes)). Then, if you want to recreate a new cluster using the same volumes, it's up to you to first create the corresponding PersistentVolumeClaims matching the names of the future Pods.
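To see whether your volumes would survive a cluster deletion, you can inspect and, if needed, change the reclaim policy on the PersistentVolumes themselves. A sketch (the PV name `pvc-1234` is illustrative; use a real name from the first command's output):

```shell
# Show each PV's reclaim policy and the claim it is bound to
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name

# Switch an existing PV to Retain so it survives deletion of its PVC
kubectl patch pv pvc-1234 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

With `Retain`, a volume released by a deleted PVC stays around (in `Released` state) until you manually clean it up or rebind it to a new claim.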

I'd love to learn more details about what happened to your cluster.

I was using operator version 0.9. I did a cluster update just to modify the resource requests of the different pods, but the operator couldn't apply it because of a node memory limit (at that point I made several attempts to fix it by changing the requests). The cluster couldn't be modified, so I added more nodes.

The operator was saying that it couldn't reconcile the cluster and started logging problems with the certificate paths. So I resorted to a very bad workaround: I upgraded the operator to 1.0-beta, but the operator didn't become healthy.

At that point I had 2h of "downtime" and decided to delete the operator and install it again. Then I recovered the data using the snapshot plugin.
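For reference, a snapshot-based recovery with the repository-s3 plugin (which the manifest below installs) looks roughly like this. Repository, snapshot, bucket, and index names are illustrative, and the S3 credentials setup is assumed to already be in place:

```shell
# Register an S3 snapshot repository
curl -sk -u "elastic:${PASSWORD}" -X PUT "https://localhost:9200/_snapshot/my_s3_repo" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "my-es-snapshots"}}'

# List available snapshots in the repository
curl -sk -u "elastic:${PASSWORD}" "https://localhost:9200/_snapshot/my_s3_repo/_all"

# Restore a single index from the chosen snapshot
curl -sk -u "elastic:${PASSWORD}" -X POST \
  "https://localhost:9200/_snapshot/my_s3_repo/snapshot_1/_restore" \
  -H 'Content-Type: application/json' \
  -d '{"indices": "my-important-index"}'
```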

(Now I'm using 0.9 again.)

This is the YAML (I've deleted the resources sections because of the post size limit):

Cluster definition
    apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
    kind: Elasticsearch
    metadata:
      name: elasticsearch
      namespace: elasticsearch
    spec:
      version: 7.4.0
      nodes:
      - name: master
        nodeCount: 3
        config:
          node.master: true
          node.data: false
          node.ingest: false
          node.ml: false
        podTemplate:
          metadata:
            labels:
              app: elasticsearch-master
          spec:
            nodeSelector:
              node.type: default
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: app
                            operator: In
                            values:
                            - elasticsearch-master
                      topologyKey: "kubernetes.io/hostname"
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: app
                            operator: In
                            values:
                            - elasticsearch-master
                      topologyKey: "failure-domain.beta.kubernetes.io/zone"
            initContainers:
            - name: install-plugins
              command:
              - sh
              - -c
              - |
                bin/elasticsearch-plugin install --batch repository-s3
            containers:
            - name: elasticsearch
              env:
              - name: ES_JAVA_OPTS
                value: -Xms512m -Xmx512m
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "elastic-master"
            resources:
              requests:
                storage: 5Gi
      - name: ingest
        nodeCount: 3
        config:
          node.master: false
          node.data: false
          node.ingest: true
          node.ml: false
        podTemplate:
          metadata:
            labels:
              app: elasticsearch-ingest
          spec:
            tolerations:
            - key: "dedicated"
              operator: "Equal"
              value: "customNode"
              effect: "NoSchedule"
            nodeSelector:
              node.type: elastic
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: "app"
                            operator: In
                            values:
                            - elasticsearch-ingest
                      topologyKey: "kubernetes.io/hostname"
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: "app"
                            operator: In
                            values:
                            - elasticsearch-ingest
                      topologyKey: "failure-domain.beta.kubernetes.io/zone"
            initContainers:
            - name: install-plugins
              command:
              - sh
              - -c
              - |
                bin/elasticsearch-plugin install --batch repository-s3
            containers:
            - name: elasticsearch  
              env:
              - name: ES_JAVA_OPTS
                value: -Xms2g -Xmx2g
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "elastic-data"
            resources:
              requests:
                storage: 1Gi
      - name: data
        nodeCount: 3
        config:
          node.master: false
          node.data: true
          node.ingest: false
          node.ml: false
        podTemplate:
          metadata:
            labels:
              app: elasticsearch-data
          spec:
            tolerations:
            - key: "dedicated"
              operator: "Equal"
              value: "customNode"
              effect: "NoSchedule"
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: "app"
                            operator: In
                            values:
                            - elasticsearch-data
                      topologyKey: "kubernetes.io/hostname"
                  - weight: 1 
                    podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: "app"
                            operator: In
                            values:
                            - elasticsearch-data
                      topologyKey: "failure-domain.beta.kubernetes.io/zone"
            nodeSelector:
              node.type: elastic
            initContainers:
            - name: install-plugins
              command:
              - sh
              - -c
              - |
                bin/elasticsearch-plugin install --batch repository-s3
            containers:
            - name: elasticsearch
              env:
              - name: ES_JAVA_OPTS
                value: -Xms6g -Xmx6g
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "elastic-data"
            resources:
              requests:
                storage: 700Gi
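Since the resources sections were trimmed for post size, here is a sketch of what the data-node container might look like with them restored. The values are illustrative, following the common guidance of sizing the JVM heap (`-Xms6g -Xmx6g`) to roughly half of the container memory:

```yaml
containers:
- name: elasticsearch
  env:
  - name: ES_JAVA_OPTS
    value: -Xms6g -Xmx6g
  resources:
    requests:
      memory: 12Gi
      cpu: 2
    limits:
      memory: 12Gi
```

Setting the memory request equal to the limit avoids the pod being scheduled onto a node that can't actually accommodate the heap, which is one way updates to requests can leave a cluster stuck unreconciled.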

Hi, did you figure out what the reason for the authentication issue was?
I am using version 1.15.