Hello! We are running Elasticsearch and Kibana on Kubernetes, deployed via Helm charts, and we recently accidentally deleted all the data from the persistent volumes used by our two-node Elasticsearch cluster. Since then we cannot get into Kibana, which says "Kibana server is not ready yet.", and the Kibana log shows this error:

    [ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception: [index_not_found_exception] Reason: no such index [.security]

That made sense right after all the indices were deleted from the volumes, but the problem is that even after we reinstalled Elasticsearch it still does not create the .security index, even though the log shows that security is enabled.
The API call GET /_cat/indices?expand_wildcards=all shows just this:

    green open .geoip_databases FdIHiGqgQyCHa66t9BL34g 1 1 42 0 79.9mb 39.9mb
What could be preventing Elasticsearch from creating the .security index?
The security index is only created when there is something to write to it.
It's perfectly normal for a brand new cluster to have no security index until you access one of the security APIs (e.g. to create a user, reset a password, etc).
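For example, any write through the security APIs will cause it to be created. A rough sketch, assuming the elastic superuser password is in $ELASTIC_PASSWORD and the cluster is reachable over HTTPS on localhost:9200 (adjust host, scheme, and credentials for your deployment; kibana_smoke_test is just a placeholder user name):

    # Before: only the hidden .geoip_databases index exists
    curl -k -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/_cat/indices?expand_wildcards=all&v"

    # Creating a user is a write to the security store, so this call
    # creates the .security system index as a side effect
    curl -k -u elastic:$ELASTIC_PASSWORD -X POST \
      "https://localhost:9200/_security/user/kibana_smoke_test" \
      -H 'Content-Type: application/json' \
      -d '{"password": "a-long-enough-password", "roles": ["kibana_admin"]}'

    # After: _cat/indices should now also list the security index
    curl -k -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/_cat/indices?expand_wildcards=all&v"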
Thank you for the answer. But in that case, isn't Kibana's request supposed to create the .security index?
I don't think so. If I'm not wrong, Kibana will first try to read the index, and since it does not exist it will give you this error.

When you reinstalled Elasticsearch, did you run the steps to create the internal users again? How did you reinstall it?
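If you are on the official images, recreating the built-in users is usually just a matter of running the bundled tool inside one of the Elasticsearch pods. A sketch, assuming the default pod name from the elastic Helm chart (elasticsearch-master-0) and the default namespace:

    # 7.x: generate (or interactively set) passwords for all built-in users,
    # including kibana_system -- this also writes to the security index
    kubectl exec -it elasticsearch-master-0 -- \
      bin/elasticsearch-setup-passwords auto

    # 8.x equivalent, one user at a time
    kubectl exec -it elasticsearch-master-0 -- \
      bin/elasticsearch-reset-password -u kibana_system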
So at what moment does Elasticsearch create the .security index, then?
I reinstalled it by deleting the Elasticsearch Helm chart and installing the same chart again. I didn't run any additional steps to create the internal users.
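Concretely, the reinstall was roughly this (assuming the release is called elasticsearch and the chart comes from the official elastic Helm repo; our actual values files are omitted):

    # Delete the existing release
    helm uninstall elasticsearch

    # Install the same chart again from the official repo
    helm repo add elastic https://helm.elastic.co
    helm repo update
    helm install elasticsearch elastic/elasticsearch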
But the problem seems to be solved now that I have also reinstalled the Kibana Helm chart. It didn't go smoothly and I had to clean some things up manually, but in the end it worked. It looks like the problem was on Kibana's side: it could not communicate securely with Elasticsearch because of a problem with its token. I say that because during the uninstall I saw the post-delete job fail with this output:

    Cleaning the Kibana Elasticsearch token
    Cleaning token statusCode: 404 {"found":false}

As I understand it, once the original .security index was deleted, the token that Kibana was using was no longer valid. The strange part is that there were no error logs saying anything about that.
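From what I can piece together (this is my rough understanding, not something I have confirmed in the docs, and the token name below is just an example), the Kibana chart asks Elasticsearch for a service account token for the built-in elastic/kibana service account, hands it to Kibana via elasticsearch.serviceAccountToken, and tries to invalidate it again on uninstall. Since tokens created this way are stored in the .security index, deleting that index silently invalidates the token:

    # Create a service token for the elastic/kibana service account
    # (roughly what the Kibana chart's install hook appears to do)
    curl -k -u elastic:$ELASTIC_PASSWORD -X POST \
      "https://localhost:9200/_security/service/elastic/kibana/credential/token/my-token"

    # List the tokens Elasticsearch currently knows for that service account
    curl -k -u elastic:$ELASTIC_PASSWORD \
      "https://localhost:9200/_security/service/elastic/kibana/credential"

    # Invalidate the token on uninstall -- a 404 with {"found":false} here matches
    # what I saw in the post-delete job once the original .security index was gone
    curl -k -u elastic:$ELASTIC_PASSWORD -X DELETE \
      "https://localhost:9200/_security/service/elastic/kibana/credential/token/my-token"

On the Kibana side, the token would then be used instead of a username/password:

    # kibana.yml
    elasticsearch.serviceAccountToken: "<value returned by the create call>"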
Is there any documentation on how Kibana communicates with Elasticsearch using tokens, especially when deployed on Kubernetes? I would really like to understand this process better.