Where does Elasticsearch store Kibana users?

I'm investigating peculiar behavior where Kibana loses its connection to Elasticsearch a few days after installation (it connects to Elasticsearch fine right after installation).

My environment:
ECK (Kubernetes).

What I did:

  1. I first checked the k8s Service. I could connect to Elasticsearch with the default "elastic" user from a test pod.
  2. I checked my kibana.yml:
elasticsearch:
  hosts:
  - https://quickstart-es-http.sy-elastic.svc:9200
  password: <the password>
  ssl:
    certificateAuthorities: /usr/share/kibana/config/elasticsearch-certs/ca.crt
    verificationMode: certificate
  username: sy-elastic-quickstart-kibana-user
  3. So I tried to verify that sy-elastic-quickstart-kibana-user still exists in Elasticsearch and that the password is correct:
curl -u "elastic:<elastic account's password>" -k "https://quickstart-es-http.sy-elastic.svc.cluster.local:9200/_security/user"

and it returns an empty dictionary!

Where can I find "sy-elastic-quickstart-kibana-user"??
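Two follow-up checks that may help narrow this down (a sketch, not a confirmed fix: the credentials are the same ones used in the curl above, and the Secret name is an assumption based on ECK's quickstart naming, so adjust it to whatever your namespace actually contains):

```shell
# 1) Query the specific user rather than listing all users; a user that
#    does not exist returns an empty object {} instead of an HTTP error:
curl -u "elastic:<elastic account's password>" -k \
  "https://quickstart-es-http.sy-elastic.svc.cluster.local:9200/_security/user/sy-elastic-quickstart-kibana-user"

# 2) Compare the password in kibana.yml against the Secret that ECK
#    generates for the Kibana user. The Secret name and data key below
#    are guesses; run `kubectl get secrets -n sy-elastic` to find the
#    real one (look for something like *kibana-user*):
kubectl get secret quickstart-kibana-user -n sy-elastic \
  -o go-template='{{index .data "sy-elastic-quickstart-kibana-user"}}' | base64 -d
```

If the decoded password differs from the one in kibana.yml, that mismatch alone would explain the authentication failures.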

I believe all users are stored in the .security index, so if user configuration goes missing it may indicate that something is wrong with your Elasticsearch cluster. Do your Elasticsearch pods have persistent storage? Which version of Elasticsearch are you using?
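To look at the state of that index directly, something like this should work with the same credentials used earlier (a sketch; the exact index name can vary by version, hence the wildcard):

```shell
# List the security system index; on 7.x a healthy cluster should show
# a .security-7 index with green/yellow health and a nonzero doc count:
curl -u "elastic:<elastic account's password>" -k \
  "https://quickstart-es-http.sy-elastic.svc.cluster.local:9200/_cat/indices/.security*?v"
```

If nothing comes back, the index holding the native users is gone, which points at the storage side rather than at Kibana.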

Thanks for the reply!
Elasticsearch 7.6.2.
And yes, one PV with 1Gi is bound.
I also checked the bound location; it has 99% free space.

By the way, I found another peculiar behavior...
I deleted the Elasticsearch pod, and the ReplicaSet created a new one (replica count is 1),
but the newly created pod couldn't become the master.

I 2020-09-04T08:26:59.381794101Z [controller/87] [Main.cc@110] controller (64 bit): Version 7.6.2 (Build e06ef9d86d5332) Copyright (c) 2020 Elasticsearch BV 
I 2020-09-04T08:27:01.182674765Z Using REST wrapper from plugin org.elasticsearch.xpack.security.Security 
I 2020-09-04T08:27:01.687244232Z using discovery type [zen] and seed hosts providers [settings, file] 
I 2020-09-04T08:27:04.864704868Z initialized 
I 2020-09-04T08:27:04.871639086Z starting ... 
I 2020-09-04T08:27:05.295816650Z publish_address {10.52.0.209:9300}, bound_addresses {0.0.0.0:9300} 
I 2020-09-04T08:27:06.080438894Z bound or publishing to a non-loopback address, enforcing bootstrap checks 
I 2020-09-04T08:27:06.895612719Z [gc][2] overhead, spent [265ms] collecting in the last [1s] 
I 2020-09-04T08:27:16.184743529Z master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{quickstart-es-default-0}{PNUG5jcCT4KQXrZ9U3VzBQ}{4_Q7ayTkSGepKFNNH_25QA}{10.52.0.209}{10.52.0.209:9300}{dilm}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305] from hosts providers and [{quickstart-es-default-0}{PNUG5jcCT4KQXrZ9U3VzBQ}{4_Q7ayTkSGepKFNNH_25QA}{10.52.0.209}{10.52.0.209:9300}{dilm}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 
I 2020-09-04T08:27:36.217100276Z timed out while waiting for initial discovery state - timeout: 30s 
I 2020-09-04T08:27:36.328004946Z publish_address {10.52.0.209:9200}, bound_addresses {0.0.0.0:9200} 
I 2020-09-04T08:27:36.328034230Z started 
I 2020-09-04T08:27:43.062612604Z no known master node, scheduling a retry 
I 2020-09-04T08:27:43.673099947Z no known master node, scheduling a retry 
I 2020-09-04T08:27:45.632385094Z no known master node, scheduling a retry 
I 2020-09-04T08:27:46.213327885Z master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{quickstart-es-default-0}{PNUG5jcCT4KQXrZ9U3VzBQ}{4_Q7ayTkSGepKFNNH_25QA}{10.52.0.209}{10.52.0.209:9300}{dilm}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305] from hosts providers and [{quickstart-es-default-0}{PNUG5jcCT4KQXrZ9U3VzBQ}{4_Q7ayTkSGepKFNNH_25QA}{10.52.0.209}{10.52.0.209:9300}{dilm}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0 
I 2020-09-04T08:27:49.498405950Z no known master node, scheduling a retry 
I 2020-09-04T08:27:49.502384335Z no known master node, scheduling a retry 
I 2020-09-04T08:27:49.555091949Z no known master node, scheduling a retry 
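The `node term 0, last-accepted version 0` in the log suggests the node started with an empty data directory, i.e. it has no previous cluster state to recover, which would also explain the vanished users. It may be worth confirming that the PV is really mounted at Elasticsearch's data path (a sketch; the pod name is taken from the log above and the mount path is the default for the official image):

```shell
# Check what filesystem backs the data path inside the pod; if this
# shows the node's overlay filesystem rather than the PV, the data
# does not survive pod restarts:
kubectl exec -n sy-elastic quickstart-es-default-0 -- \
  df -h /usr/share/elasticsearch/data

# Cross-check that the pod's volume claim is bound to the expected PV:
kubectl get pvc -n sy-elastic
```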

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.