However, I'm trying to configure both Elasticsearch and Kibana in a way that bypasses the need for an enrollment token on initial startup. I posted a topic a few days ago (Disable enrolment-token requirement on initial startup of Elasticsearch and Kibana), and that discussion led me to believe that I need to set a system username and password for Kibana. I thus modified the kibana.yml file, which now looks like this:
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.name: "kibana"
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"
But now when I run my containers and try to access Kibana in the browser, I get the error message, "Kibana server is not ready yet." Any idea why this is happening and what to do about it? Also, any clues on the correct configuration for bypassing the enrollment-token requirement?
Yes, both Kibana and Elasticsearch are running in separate containers. FYI, here is the tail end of the logs that Kibana is spitting out:
[2023-03-11T20:32:07.805+00:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: 10179b96-1984-42fc-ac0f-f5948468ca08
[2023-03-11T20:32:07.885+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.886+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2023-03-11T20:32:07.909+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.910+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2023-03-11T20:32:07.921+00:00][WARN ][plugins.encryptedSavedObjects] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.931+00:00][WARN ][plugins.actions] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:07.938+00:00][INFO ][plugins.notifications] Email Service Error: Email connector not specified.
[2023-03-11T20:32:08.021+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:08.023+00:00][WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[2023-03-11T20:32:08.028+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2023-03-11T20:32:08.081+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
[2023-03-11T20:32:08.122+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
[2023-03-11T20:32:08.804+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
[2023-03-11T20:32:08.877+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticsearch
[2023-03-11T20:32:09.730+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
I did try using elasticsearch.hosts: [ "http://host.docker.internal:9200" ] before (and I also tried https).
I do believe that security is enabled on Elasticsearch by default. This is what my elasticsearch.yml file looks like:
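For context, my understanding is that the getaddrinfo ENOTFOUND elasticsearch error means the Kibana container can't resolve the elasticsearch hostname at all. Putting both containers on a shared user-defined Docker network would look roughly like this (container names and image tags are illustrative; on a user-defined network the container name doubles as its DNS name):

```shell
# Create a shared user-defined bridge network (name is arbitrary)
docker network create elastic

# Start Elasticsearch on that network; its container name becomes
# resolvable by other containers on the same network
docker run -d --name elasticsearch --net elastic -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:8.6.2

# Start Kibana on the same network so "elasticsearch" resolves
docker run -d --name kibana --net elastic -p 5601:5601 \
  docker.elastic.co/kibana/kibana:8.6.2
```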
I'm not sure how to copy the CA over. Do you have instructions on that?
I did try setting the ssl.verification_mode to none in my elasticsearch.yml file, but it caused the Elasticsearch container to exit. Here is the log it spat out before it exited.
{"@timestamp":"2023-03-11T20:22:42.784Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"486da28cafb8","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"unknown setting [ssl.verification_mode] did you mean [reindex.ssl.verification_mode]?","error.stack_trace":"java.lang.IllegalArgumentException: unknown setting [ssl.verification_mode] did you mean [reindex.ssl.verification_mode]?\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:561)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:507)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:477)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:447)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:151)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:56)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:472)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:322)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
ERROR: Elasticsearch exited unexpectedly
Nope, that goes in the Kibana settings, since Kibana is the client connecting to Elasticsearch.
Note that the syntax is different there: in kibana.yml the setting is elasticsearch.ssl.verificationMode.
Verification applies when a client (in this case Kibana) connects to a server (in this case Elasticsearch) via HTTPS. To be more precise, there are several kinds of validation, but server-certificate verification is the one we are concerned with here.
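In kibana.yml it would look something like this (a sketch; none disables certificate verification entirely, which is only sensible for local testing, never production):

```yaml
# kibana.yml - connect to Elasticsearch over HTTPS but skip
# certificate verification (testing only)
elasticsearch.hosts: [ "https://elasticsearch:9200" ]
elasticsearch.ssl.verificationMode: none
```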
But I'm still getting the error message "Kibana server is not ready yet." in the browser. Also, the Elasticsearch container logs are spitting out this:
{"@timestamp":"2023-03-11T21:15:04.640Z", "log.level": "INFO", "message":"Authentication of [kibana_system] was terminated by realm [reserved] - failed to authenticate user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[c920f4467cd8][system_critical_read][T#3]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","trace.id":"ccafb99ca52e6a2ac9841b1a70ff1178","elasticsearch.cluster.uuid":"2QzxiqfsTKmduPim9vgYBg","elasticsearch.node.id":"fLHsFhQfSOSZwdfr-uYx9A","elasticsearch.node.name":"c920f4467cd8","elasticsearch.cluster.name":"docker-cluster"}
The log message is saying something about failing to authenticate user [kibana_system].
Okay, this worked for me. I guess what I'm ultimately wondering is whether or not it's possible to pre-configure the kibana_system user password in Elasticsearch. But maybe I'll post a new topic for that. Thanks for all your help!
I did suspect that's what those lines from that docker-compose.yml file were doing.
What about the ELASTIC_PASSWORD being used in those lines? Is it possible to pre-configure that? I only see it being set as an environment variable for the es01 node (ELASTIC_PASSWORD=${ELASTIC_PASSWORD}).
I'm not sure what exactly you mean by pre-configure... The passwords and authentication data are stored within special Elastic indices inside Elasticsearch; there is no outside location where they live.
The line you are referring to is actually a method to set the elastic user password on the very first startup of Elasticsearch. That sets it from then on, for the entire cluster.
If you don't do that, a password will be generated for you and should be shown on the console.
Or you could use the command-line tool, just like we did for kibana_system, and reset the elastic user password afterwards.
That's a high-level summary of how to approach it.
There's quite a bit of detail on built-in user password management in the documentation. Caution: don't get confused by what's known as the bootstrap password...
I guess what I mean is that since I'm not using Docker Compose but separately spinning up Elasticsearch and Kibana containers, can I set a configuration in the elasticsearch.yml file that essentially does the same thing as the ELASTIC_PASSWORD=${ELASTIC_PASSWORD} environment variable in the docker-compose.yml file?
Okay, good to know. Also, it did work using the -e flag.
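In other words, something like this (image tag, network, and password are placeholders for my actual values):

```shell
# Set the elastic superuser password at first startup via an
# environment variable instead of docker-compose
docker run -d --name elasticsearch --net elastic -p 9200:9200 \
  -e ELASTIC_PASSWORD=changeme \
  docker.elastic.co/elasticsearch/elasticsearch:8.6.2
```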
Back to the kibana_system user password. I generated a new password by exec-ing into the Elasticsearch container and running ./bin/elasticsearch-reset-password -u kibana_system --url https://localhost:9200. Just wondering if it's also possible to set the kibana_system user password by issuing a POST request, similar to how it's done with Docker Compose in this line:
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
But if so, I'm not sure how I would need to modify this curl request for my specific case of separately running individual Elasticsearch and Kibana containers.
Yup, that should work. Once you have the elastic superuser password, you can use the API.
My best suggestion is to just try all the different ways and pick the one you like. It's just a Docker container; you can break it and start over.
Part of the back and forth is that I'm not actually sure what you're trying to accomplish, so I feel like I'm feeling around the elephant with a blindfold on.
Depending on how much you want to learn, the API is an excellent area to dig into. There are a few things it cannot do, but most things can be accomplished via the API; for those that can't, you can refer to the command-line tools.
Cool. Yeah, I managed to figure it out: I sent the POST request using curl and it updated the password for me. However, I had to use the -k flag, which disables SSL/TLS certificate verification.
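For reference, the request I sent was along these lines (host, port, and passwords here are placeholders for my actual values):

```shell
# Change the kibana_system password via the security API;
# -k skips TLS certificate verification
curl -k -X POST -u "elastic:changeme" \
  -H "Content-Type: application/json" \
  https://localhost:9200/_security/user/kibana_system/_password \
  -d '{"password":"kibana_password"}'
```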
I guess my next step is figuring out how to do it with SSL/TLS verification enabled. But I think that means I'll need to work out what's going on with the issuing/creation of certificates in that docker-compose.yml file. Any reading you could point me towards that would help sort out how to handle the certificates?
On what I'm trying to accomplish: I'm working on a school project where I'm trying to set up an automatic deployment of the ELK stack with Docker containers on an AWS ECS cluster using the AWS CDK.
Well, I think you need to learn about Docker volumes.
Then you can look at the Elasticsearch command-line tools to figure out what that cert command does.
You can exec into the Elasticsearch container and run the POST command there; it will be secure.
Or copy the CA out of the container and use it locally...
Which you will need anyway if you're going to use Filebeat, Logstash, or any other ingest tool over the SSL connection.
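Copying it out is roughly a one-liner (the container name and cert path here assume the default security auto-configuration; your paths may differ depending on how the certs were generated):

```shell
# Copy the auto-generated CA certificate out of the container
docker cp elasticsearch:/usr/share/elasticsearch/config/certs/http_ca.crt ./http_ca.crt

# Then point curl at it instead of using -k
curl --cacert ./http_ca.crt -u "elastic:changeme" https://localhost:9200
```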