"Password authentication failed for elastic"

I'm trying to get a near-vanilla deployment of Elasticsearch up and running on ECK, which I've based on the quick start instructions here

I initially tried a 3-member cluster and saw a lot of connection-failed errors, so to narrow the scope of the problem I dropped it to a 1-member cluster, and now I'm seeing this a lot in the logs:

...message": "security index is unavailable. short circuiting retrieval of user [elastic]...
...message": "Authentication to realm file1 failed - Password authentication failed for elastic...

Authentication setup seems to have failed in some manner, but I don't know much about Elasticsearch authentication mechanisms or how ECK sets them up. Any help is appreciated.

I'm deploying ECK onto Kubernetes 1.18.6 (Rancher), and here is my YAML:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es1
  namespace: logging
spec:
  version: 7.8.1
  nodeSets:
  - name: cluster1
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
    podTemplate:
      spec:
        nodeSelector:
          hostpath: ssd
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms8g -Xmx8g
          resources:
            requests:
              memory: 16Gi
              cpu: 1
            limits:
              memory: 16Gi
              cpu: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: openebs-hostpath

ECK uses the file realm internally to authenticate. Is this happening on a fresh 1-node cluster? If so, the ECK operator logs might also prove useful for troubleshooting.
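For reference, ECK stores the elastic user's password in a Secret it creates next to the cluster (named `<cluster-name>-es-elastic-user`, per the quick start docs). A quick sanity check, assuming the cluster name `es1` and namespace `logging` from the manifest above:

```shell
# Read the elastic user's password from the ECK-managed secret
PASSWORD=$(kubectl get secret es1-es-elastic-user -n logging \
  -o go-template='{{.data.elastic | base64decode}}')

# Try authenticating against the cluster service from inside the network
# (es1-es-http is the Service name ECK derives from the cluster name)
curl -u "elastic:$PASSWORD" -k "https://es1-es-http.logging.svc:9200"
```

If the curl still returns a 401 here, the file/native realms really are out of sync with the secret, rather than it being a client-side credential mix-up.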

I'm running into this issue as well. I'm on vanilla Kubernetes, and when I install an Elasticsearch instance it comes up successfully (green status), but it doesn't create any indices, such as the .security index. So I get the above error:

security index is unavailable. short circuiting retrieval of user [elastic]

I'm on ECK version 1.1.

@data_smith can you share your Elasticsearch resource manifest?
Are you maybe using a different Docker image for Elasticsearch than the official one?

I'm on a closed network that doesn't allow me to move things to the open internet. But I found that things work if I simply rename my cluster. So my suspicion is that some artifact is left behind when you delete an Elasticsearch yaml, and this artifact (a ConfigMap or Secret, not sure which) causes problems. So one quick and dirty workaround when you reinstall a new Elasticsearch yaml is to rename it.

I'm using the official image, latest version of Elasticsearch: 7.7.1.

The garbage collector should delete all of the resources (and does in my tests). One thing that has caused issues with Kibana is having additional services point to it that are named the same thing as config keys (e.g. server), which populates env vars in the pod with values that Kibana can't make sense of. It's possible something similar was happening if changing the name made a difference.
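For anyone hitting the same thing, the Kubernetes side of that mechanism is easy to inspect: Kubernetes injects Docker-link-style environment variables for every Service visible to a pod, so a Service named `server` produces variables that collide with Kibana's `server.*` config keys. The pod name below is a placeholder:

```shell
# A Service named "server" in the pod's namespace injects variables like
#   SERVER_SERVICE_HOST=10.43.0.12
#   SERVER_PORT=tcp://10.43.0.12:5601
# into every pod started after the Service. Inspect what a pod sees with:
kubectl exec -n logging my-kibana-pod -- env | grep '^SERVER_'
```

If any unexpected `SERVER_*` variables show up, renaming the offending Service (or the cluster, as above) removes the collision.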

I'm not using Kibana, so that's not my issue. I know something is left over, but it's not obvious what: I checked the Secrets and ConfigMaps and didn't see anything suspicious there. Not sure. Anyhow, there's a workaround: just rename it.

Kibana was just an example of where we had seen it previously; the same thing could theoretically happen for Elasticsearch. If there is something left over after deleting a resource (assuming garbage collection has not been disabled), it would be a previously unidentified bug, so if you do discover something, please let us know.

Will do