Hi,
I am running into an issue with a new setup and hope someone can point me in the right direction.
The OS is Ubuntu 22.04 LTS. Elasticsearch and Kibana are both version 8.14.3.
For now I am starting with everything on a single VM. After the initial setup I had both Elasticsearch and Kibana running.
Before adding a Fleet Server and agents, I added a separate disk to store data and logs. After changing the data and log paths in elasticsearch.yml, Elasticsearch is still running fine. Kibana is running too, but all I get is "Kibana server is not ready yet."
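For reference, the change was only the two path settings, roughly like this (the /data mount point is an example, not my actual path):

```shell
# Sketch of the move, assuming the new disk is mounted at /data
# and a default .deb/.rpm install; adjust paths to your layout.
sudo systemctl stop elasticsearch
sudo mkdir -p /data/elasticsearch /data/logs
sudo chown -R elasticsearch:elasticsearch /data/elasticsearch /data/logs
# Then in /etc/elasticsearch/elasticsearch.yml:
#   path.data: /data/elasticsearch
#   path.logs: /data/logs
sudo systemctl start elasticsearch
```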
Looking at the Kibana logs, I get the impression this is certificate related, but I am not sure, and I do not understand why: the only change was the data and log paths, nothing else. Before that change, Kibana was able to communicate with Elasticsearch just fine, and the CA certificate fingerprint in kibana.yml still matches the CA certificate in Elasticsearch.
Any ideas what I am missing?
Hello Niels,
With this description alone we cannot really help you. Please provide the error message and any other relevant logs.
Best regards
Wolfram
This is the error I see in the kibana.log
"Unable to retrieve version information from Elasticsearch nodes. security_exception\n\tRoot causes:\n\t\tsecurity_exception: failed to authenticate service account [elastic/kibana] with token name [enroll-process-token-1720021188528]","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":9219,"uptime":19.67761492},"trace":{"id":"ffba4f74d09b2078c72df91ca0f9fd01"},"transaction":{"id":"c185e06ce470ab38"}}
I can provide more, obviously, but everything else seems fine and I am not sure what to provide. The ES and Kibana services are running, and a curl request to ES gives the expected answer.
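For completeness, this is the kind of check I ran against Elasticsearch (default HTTPS port and package-default certificate path; adjust if yours differ):

```shell
# Basic health check against the local node;
# prompts for the elastic user's password.
curl --cacert /etc/elasticsearch/certs/http_ca.crt \
     -u elastic https://localhost:9200
```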
From the error, it seems that Kibana fails to authenticate with its service token. Those tokens are stored either in an index or in a file (I don't remember which offhand). Since you said you moved the data files, I would guess that is the reason...
Sounds like it.
I checked the installation for existing tokens using bin/elasticsearch-service-tokens list
The answer was empty, so I guess you are right: with the change of the data and log paths, the existing tokens stopped working.
I created a new token for Kibana and changed the kibana.yml file accordingly.
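Concretely, what I ran was roughly the following (token name "kibana-token" as it appears in the error below; paths are the package defaults):

```shell
# Create a new file-backed token for the Kibana service account.
# The command prints the token value once; copy it immediately.
sudo /usr/share/elasticsearch/bin/elasticsearch-service-tokens \
    create elastic/kibana kibana-token
# Put the printed value into /etc/kibana/kibana.yml:
#   elasticsearch.serviceAccountToken: "<token value printed above>"
sudo systemctl restart kibana
```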
Unfortunately that did not change much; only the token and token name in the error are different. The new error is:
"Unable to retrieve version information from Elasticsearch nodes. security_exception\n\tRoot causes:\n\t\tsecurity_exception: failed to authenticate service account [elastic/kibana] with token name [kibana-token]","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":12190,"uptime":20.159175801},"trace":{"id":"ffba4f74d09b2078c72df91ca0f9fd01"},"transaction":{"id":"c185e06ce470ab38"}}
This token is listed when I check for existing tokens again. Is there anywhere else I can look to determine why this authentication fails?
I am not sure this works: the tokens Kibana uses belong, as far as I know, to service accounts rather than normal users, and I am not sure you can simply create new tokens for them.
Do you still have the old storage? Could you try pointing Elasticsearch back at the old storage to see if it works there? If it does, you could try copying the data over to the new storage.
Also, you said that this is a new setup. Maybe it is less effort to just set up the cluster from scratch than to keep hunting for the error and trying to make it work again.
It works!
Turns out a new service_tokens file was created, but Elasticsearch did not have read/write access to it. After correcting that, Kibana can now authenticate with the new token I created.
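In case anyone else hits this, the fix was just ownership and permissions on the file (package-default paths; your owner/group may differ):

```shell
# Running elasticsearch-service-tokens via sudo created the file as
# root:root, so the elasticsearch user could not read it. Hand it back:
sudo chown root:elasticsearch /etc/elasticsearch/service_tokens
sudo chmod 660 /etc/elasticsearch/service_tokens
sudo systemctl restart elasticsearch
```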
Thanks heaps for thinking along with me