I'm having some problems with my Docker cluster. Today I decided to change the configuration of the Elasticsearch services, and after the change, when I try to deploy the stack, it gives the error below.
elastic_elasticsearch01.1.rhik0bu8ex0f@docker-manager | ERROR: [1] bootstrap checks failed
elastic_elasticsearch01.1.rhik0bu8ex0f@docker-manager | [1]: memory locking requested for elasticsearch process but memory is not locked
elastic_elasticsearch01.1.rhik0bu8ex0f@docker-manager | ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elastic-docker-cluster.log
Elasticsearch requested to lock memory (bootstrap.memory_lock=true) but apparently failed to do so.
There are two possible reasons it failed to lock the memory:
you do not have enough RAM to lock 2 GB of heap for each node on the machine
the process does not have the rights to lock that much memory
If you open the logs as suggested by the error message, you'll probably find:
Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
This can result in part of the JVM being swapped out.
Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
On Docker it's even easier. Just add the following lines to your configuration (for each Elasticsearch service):
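As a sketch (the original snippet isn't shown above), assuming a standard docker-compose/stack file, it usually looks something like this for each node:

elasticsearch01:
  environment:
    - bootstrap.memory_lock=true
  ulimits:
    memlock:
      soft: -1
      hard: -1

Be aware that docker stack deploy has historically ignored the ulimits key in swarm mode, so this may behave differently than with plain docker-compose.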
When I try to run the stack, it says the option below is deprecated.
I removed the option bootstrap.memory_lock=true, but now it gives me this error:
node.name": "elasticsearch2", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [elasticsearch1, elasticsearch2] to bootstrap a cluster: have discovered [{elasticsearch2}{DmvY5NxNSK-9Lz4_tlFUdA}{CnB4qkAgSoGjlFRaonRZww}{10.0.0.9}{10.0.0.9:9300}{dilm}{ml.machine_memory=8366452736, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [10.0.0.5:9300] from hosts providers and [{elasticsearch2}{DmvY5NxNSK-9Lz4_tlFUdA}{CnB4qkAgSoGjlFRaonRZww}{10.0.0.9}{10.0.0.9:9300}{dilm}{ml.machine_memory=8366452736, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
I'm using another host as a worker in swarm mode; I don't know if that has something to do with it.
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
cl8hexf88909sy9t253z2wl46 * docker-manager Ready Active Leader 18.09.9
lg1el3f7xyng2v01xt788whvq photon-machine Ready Active 18.09.9
Kibana logs:
"warning","elasticsearch","admin"],"pid":6,"message":"No living connections"
In Docker versions prior to 18.09, containerd was managed by the Docker engine daemon. In Docker Engine 18.09, containerd is managed by systemd. Since containerd is managed by systemd, any custom configuration to the docker.service systemd configuration which changes mount settings (for example, MountFlags=slave) breaks interactions between the Docker Engine daemon and containerd, and you will not be able to start containers.
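One way to check whether such a customization is present (a sketch; assumes a systemd-based host) is to look at the effective unit file, including any drop-in overrides:

# show the full docker.service definition, including drop-ins, and look for MountFlags
systemctl cat docker.service | grep -i MountFlags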
If you've configured swappiness to 1, then that's fine.
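For reference, a minimal sketch of how that is usually made persistent on the host (assuming a standard Linux setup):

# /etc/sysctl.conf -- Elasticsearch recommends minimizing swapping
vm.swappiness = 1

and then reload the settings with sysctl -p.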
Your elasticsearch02 service is not exposing ports 9200 and 9300.
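As a sketch (service name and mappings are assumptions based on your stack), the missing section would look something like:

elasticsearch02:
  ports:
    - "9200:9200"
    - "9300:9300"

Keep in mind that in swarm mode published ports go through the routing mesh, so two services cannot publish the same host port at the same time.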