Thanks for the fast reply, and sorry for not providing it. I thought it was the default error when the discovery type is not single-node:
elasticsearch-1 | {"@timestamp":"2025-11-20T16:21:00.046Z", "log.level":"ERROR", "message":"node validation exception\n[1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch. For more information see [``https://www.elastic.co/docs/deploy-manage/deploy/self-managed/bootstrap-checks?version=9.2``]\nbootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured; for more information see [``https://www.elastic.co/docs/deploy-manage/deploy/self-managed/bootstrap-checks?version=9.2#bootstrap-checks-discovery-configuration``]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"909025164b3c","elasticsearch.cluster.name":"docker-cluster"}
Thanks, I'd still recommend making this more obvious in the Docker documentation. There is an enormous amount of documentation, and literally anyone trying out Elasticsearch via Docker is going to run into this.
I think I am one of them.
I understand that running Elasticsearch as a single node is discouraged, although I don't get why it would be bad even when the scale of the application is small (a RAG app with not that many users, since only company employees have access).
My goal for now is to get a proof of concept working locally. I just want to consume the API.
We currently plan to deploy it via Azure, but we might self-host it. Because the node would not be accessible from the outside, I might consider not setting up auth.
From what you wrote, this appears to be almost precisely your use case?
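For a purely local proof of concept, I would probably end up running something like this, with single-node discovery and security switched off (only a sketch, and clearly not something I'd carry over to the Azure deployment):
$ docker run --name es01 --net elastic -p 127.0.0.1:9200:9200 -it -m 2GB \
-e discovery.type=single-node \
-e xpack.security.enabled=false \
docker.elastic.co/elasticsearch/elasticsearch:9.2.1
With security off, the API should then answer plain HTTP without credentials:
$ curl http://localhost:9200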
I just followed those docs:
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:9.2.1
$ docker network create elastic
$ docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB \
docker.elastic.co/elasticsearch/elasticsearch:9.2.1
< note the password that is printed to the screen during startup, in the middle of all the other startup logs >
In another window:
$ curl -s -k -u elastic https://localhost:9200
< give the password you saw above >
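The docs also show a way to drop the -k flag by copying the generated CA certificate out of the container; I believe this is the path for 9.x, but I haven't re-checked it:
$ docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ curl --cacert http_ca.crt -u elastic https://localhost:9200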
EDIT:
Actually, for me, the above only worked because I did not try to reset the password. If I did try to reset the password, the new es01 container crashed. If I bumped memory to 2 GB, it was fine. I reduced it to 1500 MB, and it was fine. I reduced it to 1200 MB, and it crashed again. The below is with 1500 MB. Maybe 1 GB is just too low for these latest releases.
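One thing I have not tried yet: pinning the JVM heap below the container limit via ES_JAVA_OPTS, which might let 1 GB keep working. Sketch only, and the heap sizes are a guess on my part:
$ docker run --name es01 --net elastic -p 9200:9200 -it -m 1GB \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
docker.elastic.co/elasticsearch/elasticsearch:9.2.1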
$ docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
WARNING: Owner of file [/usr/share/elasticsearch/config/users] used to be [root], but now is [elasticsearch]
WARNING: Owner of file [/usr/share/elasticsearch/config/users_roles] used to be [root], but now is [elasticsearch]
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: ucCsgwVnG*8SD-4yM0P4
$
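If anyone else wants to skip the interactive reset entirely, I believe the image also accepts an ELASTIC_PASSWORD environment variable at first start (not verified by me on 9.2.1):
$ docker run --name es01 --net elastic -p 9200:9200 -it -m 2GB \
-e ELASTIC_PASSWORD=<choose-a-password> \
docker.elastic.co/elasticsearch/elasticsearch:9.2.1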