Error: Kibana server is not ready yet

Hello. I'm running Elasticsearch and Kibana via docker containers, whose images I'm building from the Dockerfiles in this repository: GitHub - elastic/dockerfiles: Dockerfiles for the official Elastic Stack images. When I run the containers as is, everything is fine.

However, I'm trying to configure both Elasticsearch and Kibana in a way that bypasses the need for an enrollment token on initial startup. I posted a topic a few days ago (Disable enrolment-token requirement on initial startup of Elasticsearch and Kibana), and that discussion led me to believe that I need to set a system username and password for Kibana. I thus modified the kibana.yml file, which now looks like this:

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: "kibana"

server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"

But now when I run my containers and try to access Kibana in the browser, I get the error message, "Kibana server is not ready yet." Any idea why this is happening and what to do about it? Also, any clues on the correct configuration for bypassing the enrollment-token requirement?

Hi @Matt_Johnston

Kibana and Elasticsearch are running in two separate containers, correct?

You will probably need to set this (see here):

elasticsearch.hosts: [ "http://host.docker.internal:9200" ]

Also, is security enabled on elasticsearch? If so, Kibana should be connecting to elasticsearch via HTTPS.

And you will also either need to copy the CA over or set the SSL verification mode to none.

Also you should be able to see the Kibana logs... And it should provide error messages why it's not connecting.
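For example, you can follow the logs like this (the container name "kibana" is an assumption; substitute whatever `docker ps` shows for yours):

```shell
# Follow the Kibana container's logs; "kibana" is an assumed container
# name -- substitute the one shown by `docker ps`.
docker logs -f kibana
```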

Hi Stephen.

Yes, both Kibana and Elasticsearch are running in separate containers. FYI, here is the tail end of the logs that Kibana is spitting out:

[2023-03-11T20:32:07.805+00:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: 10179b96-1984-42fc-ac0f-f5948468ca08
[2023-03-11T20:32:07.885+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.886+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2023-03-11T20:32:07.909+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.910+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2023-03-11T20:32:07.921+00:00][WARN ][plugins.encryptedSavedObjects] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.931+00:00][WARN ][plugins.actions] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.938+00:00][INFO ][plugins.notifications] Email Service Error: Email connector not specified.
[2023-03-11T20:32:08.021+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:08.023+00:00][WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[2023-03-11T20:32:08.028+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:08.081+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
[2023-03-11T20:32:08.122+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
[2023-03-11T20:32:08.804+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
[2023-03-11T20:32:08.877+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticsearch
[2023-03-11T20:32:09.730+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell

I did try using elasticsearch.hosts: [ "http://host.docker.internal:9200" ] before (and I also tried https).

I do believe that security is enabled on Elasticsearch by default. This is what my elasticsearch.yml file looks like:

cluster.name: "docker-cluster"
network.host: 0.0.0.0

I'm not sure how to copy the CA over. Do you have instructions on that?

I did try setting the ssl.verification_mode to none in my elasticsearch.yml file, but it caused the Elasticsearch container to exit. Here is the log it spat out before it exited.

{"@timestamp":"2023-03-11T20:22:42.784Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"486da28cafb8","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"unknown setting [ssl.verification_mode] did you mean [reindex.ssl.verification_mode]?","error.stack_trace":"java.lang.IllegalArgumentException: unknown setting [ssl.verification_mode] did you mean [reindex.ssl.verification_mode]?\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:561)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:507)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:477)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:447)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:151)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:56)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:472)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:322)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log

ERROR: Elasticsearch exited unexpectedly

Nope, that setting goes in the Kibana config, since it is Kibana that connects to Elasticsearch.
Note that the syntax for the Kibana setting, elasticsearch.ssl.verificationMode, is different.

Try this in the kibana settings.

#server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "https://host.docker.internal:9200" ]
elasticsearch.ssl.verificationMode: none
#monitoring.ui.container.elasticsearch.enabled: true

Nope, don't do that...

Verification matters when a client (in this case Kibana) connects to a server (in this case Elasticsearch) via HTTPS. To be more precise, there are several types of validation, but server-certificate verification is the one we are concerned with here.

Okay, so here is what my kibana.yml file looks like now:

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: "kibana"

server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "https://host.docker.internal:9200" ]
elasticsearch.ssl.verificationMode: none
# monitoring.ui.container.elasticsearch.enabled: true

elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"

And here is my elasticsearch.yml file:

cluster.name: "docker-cluster"
network.host: 0.0.0.0

But I'm still getting the error message "Kibana server is not ready yet." in the browser. Also, the Elasticsearch container logs are spitting out this:

{"@timestamp":"2023-03-11T21:15:04.640Z", "log.level": "INFO", "message":"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[c920f4467cd8][system_critical_read][T#3]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","trace.id":"ccafb99ca52e6a2ac9841b1a70ff1178","elasticsearch.cluster.uuid":"2QzxiqfsTKmduPim9vgYBg","elasticsearch.node.id":"fLHsFhQfSOSZwdfr-uYx9A","elasticsearch.node.name":"c920f4467cd8","elasticsearch.cluster.name":"docker-cluster"}

The log message is saying something about failing to authenticate user [kibana_system].

Is this correct? What did you set it to?

It's failing authentication....

If you did not set up the kibana_system user, then this will fail... Did you set it up?

Try this from your host command line

curl -k -u kibana_system:password https://localhost:9200

Show the results; if you get an authentication error, then you either did not set it up or have the wrong password.

If you did not set this up, you will need to exec into the Elasticsearch container and set the kibana_system password.

Get the docker containers

$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED        STATUS                  PORTS                              NAMES
b507697c77d0   docker.elastic.co/kibana/kibana:8.6.2                 "/bin/tini -- /usr/l…"   31 hours ago   Up 31 hours (healthy)   0.0.0.0:5601->5601/tcp             test-kibana-1
dd55f239d187   docker.elastic.co/elasticsearch/elasticsearch:8.6.2   "/bin/tini -- /usr/l…"   31 hours ago   Up 31 hours (healthy)   0.0.0.0:9200->9200/tcp, 9300/tcp   test-es01-1

Exec into the elasticsearch container get to the right directory and reset the kibana_system password.

$ docker exec -it dd55f239d187  /bin/bash
$ cd /usr/share/elasticsearch
$ ./bin/elasticsearch-reset-password -u kibana_system --url https://localhost:9200

Then set the correct password in your kibana.yml and try again.

Okay, this worked for me. I guess what I'm ultimately wondering is whether or not it's possible to pre-configure the kibana_system user password in Elasticsearch. But maybe I'll post a new topic for that. Thanks for all your help!

No, you cannot pre-configure the kibana_system password; you should set it right after you start Elasticsearch.

If you look at the full docker example shown here, these lines are doing just that:

        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;

Otherwise, you would use the enrollment token, which is the other approach...

It is either use the enrollment token or set up the kibana_system password.


Okay, that is very helpful to know. Thank you.

I did suspect that's what those lines from the docker-compose.yml file were doing.

What about the ELASTIC_PASSWORD being used in those lines? Is it possible to pre-configure that? I only see it being set as an environment variable for the es01 node (ELASTIC_PASSWORD=${ELASTIC_PASSWORD}).

I'm not sure what exactly you mean by pre-configure... The passwords and authentication data are stored within special Elastic indices inside Elasticsearch... There is no outside location where they live.

The line you are referring to is actually a method to set the elastic user password on the very initial startup of Elasticsearch. That sets it from that point forward and for the entire cluster.

If you don't do that, a password will be generated for you and should be shown on the console.

Or you could use the command line tool just like what we did for the kibana_system and reset the elastic user password afterwards.
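For example, the same tool used above for kibana_system should work for the elastic user (run from inside the Elasticsearch container; the URL assumes HTTPS on port 9200):

```shell
# Reset the built-in elastic superuser's password from inside the
# Elasticsearch container, same tool as used for kibana_system.
cd /usr/share/elasticsearch
./bin/elasticsearch-reset-password -u elastic --url https://localhost:9200
```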

That's a high-level summary of how to approach it.

There's quite a bit of detail on built-in users' password management in the documentation... Caution: don't get confused by what's known as the bootstrap password...

I guess what I mean is that since I'm not using Docker Compose but separately spinning up Elasticsearch and Kibana containers, can I set a configuration in the elasticsearch.yml file that essentially does the same thing as this environment variable ELASTIC_PASSWORD=${ELASTIC_PASSWORD} in the docker-compose.yml file?

No, sorry, it does not work that way...

You can specify it as part of your docker run command, as an environment variable passed in with the -E flag, I believe.

Okay, good to know. Also, it did work using the -e flag.
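For reference, a sketch of the kind of docker run invocation this implies (container name, network name, and password here are placeholders, not the exact command I ran; the 8.6.2 image tag matches the containers shown earlier):

```shell
# Start Elasticsearch with the elastic superuser password pre-set via
# an environment variable. Names and the password are placeholders --
# adjust for your own setup.
docker run -d --name elasticsearch --net elastic -p 9200:9200 \
  -e ELASTIC_PASSWORD=changeme \
  docker.elastic.co/elasticsearch/elasticsearch:8.6.2
```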

Back to the kibana_system user password. I regenerated a new password by exec-ing into the Elasticsearch docker container using this command ./bin/elasticsearch-reset-password -u kibana_system --url https://localhost:9200. Just wondering if it's possible to also set the kibana_system user password by issuing a POST request similar to how it's being done with Docker Compose in the line

until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;

But if so, I'm not sure how I would need to modify this curl request for my specific case of separately running individual Elasticsearch and Kibana containers.

Yup, that should work; once you have the elastic superuser password, you can use the API.
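As a sketch, the compose loop adapts into a one-off request run from the host against a standalone container (passwords here are placeholders; swap -k for --cacert with the cluster CA once you have copied it out of the container):

```shell
# One-off adaptation of the compose loop for a standalone container,
# run from the host. "changeme" and "kibanapass" are placeholders.
# -k skips TLS verification; prefer --cacert <ca.crt> once you have the CA.
curl -k -u "elastic:changeme" \
  -X POST "https://localhost:9200/_security/user/kibana_system/_password" \
  -H "Content-Type: application/json" \
  -d '{"password":"kibanapass"}'
```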

My best suggestion is to just try all the different ways and pick the one you like... It's just a docker container, and you can break it and start over 🙂

Part of the back and forth is that I'm not actually sure what you're trying to accomplish, so I feel like I'm feeling around the elephant with a blindfold on.

Depending on how much you want to learn... the API is an excellent area to learn. There are a few things it cannot do, but most things can be accomplished via the API... for those that can't, you can refer to the command-line tools.

Cool. Yeah, I managed to figure it out—I sent the POST request using curl and it updated the password for me. However, I had to do it using the -k flag, which I guess disables SSL/TLS verification.

I guess my next step is trying to figure out how to do it with SSL/TLS enabled. But I think that means I'll need to figure out what's going on with the issuing/creating of certificates in that docker-compose.yml file. Any reading you could point me towards that would help sort out how to handle the certificates?

On what I'm trying to accomplish: I'm working on a school project where I'm trying to set up an automatic deployment of the ELK stack with docker containers on an AWS ECS cluster using AWS CDK.

Well, I think you need to learn about docker volumes.

Then you can look at the Elasticsearch command-line tools to figure out what that cert command does.

You can exec into the Elasticsearch container and run the POST command there; it will be secure.

Or copy the CA out of the container and use it locally...
Which you will need anyway if you're going to use Filebeat, Logstash, or any other ingest tool over the SSL connection.

Okay, got it. Thanks.

Whereabouts would I find the CA in the container? And how would I copy the CA out of the container?

Hmmm good homework problem... Pretty much spelled out in the docker compose... Exec in and take a look...

Okay, I'll do some digging. Thank you!
