ES runs in a Docker container (Debian-based image using the .deb installation). The cluster has dedicated master nodes and data nodes. A node keeps failing with this error:
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2018-01-31T07:06:34,594][INFO ][o.e.n.Node ] [esnode] stopping ...
[2018-01-31T07:06:34,659][INFO ][o.e.n.Node ] [esnode] stopped
[2018-01-31T07:06:34,659][INFO ][o.e.n.Node ] [esnode] closing ...
[2018-01-31T07:06:34,667][INFO ][o.e.n.Node ] [esnode] closed
[2018-01-31T07:08:06,966][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-01-31T07:08:06,967][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-01-31T07:08:06,968][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-01-31T07:08:06,968][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
I applied the settings above by editing the files inside the container and restarting the process via service elasticsearch restart.
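One thing worth noting (an assumption about the setup here, since the exact docker run command isn't shown): editing /etc/security/limits.conf inside the container usually has no effect, because pam_limits is never consulted for a container's processes; the memlock limit is inherited from the Docker daemon. It has to be raised when the container is started, for example:

```shell
# Sketch: raise the memlock limit at container start instead of via
# limits.conf (PAM limits are not applied inside a container).
# The container and image names are placeholders for your Debian-based
# 5.3.2 image.
docker run -d --name esnode \
  --ulimit memlock=-1:-1 \
  your-debian-es-image:5.3.2
```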
We are using version 5.3.2 with a Debian-based image. The documentation is written for CentOS and suggests the following:
"The image offers several methods for configuring Elasticsearch settings with the conventional approach being to provide customized files, i.e. elasticsearch.yml, but it’s also possible to use environment variables to set options:"
I tried passing the Docker environment variable and observed the following JVM configs:
When I query GET _nodes?filter_path=**.mlockall, I get false. There are no errors in the logs. We don't use Docker Compose, so let me know how I can set this config.
I tried that option as well, and verified that the ulimit and memory_lock settings are applied via docker inspect <container_name> and docker exec <container_name> env. Both were set as expected.
But when I verify through curl localhost:9200/_nodes?filter_path=**.mlockall, I get false for all nodes. There are no errors in the log either. It looks like the environment variable and the ulimit option are being ignored.
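For reference, here is a minimal sketch of that verification step, run against a hard-coded sample of the _nodes response (the node ID is made up) so the filtering is easy to see; against a live node, replace the heredoc with the curl call shown in the comment:

```shell
# Extract the mlockall flag from a sample _nodes response.
# On a live cluster, use instead:
#   curl -s 'localhost:9200/_nodes?filter_path=**.mlockall' | grep -o '"mlockall":[a-z]*'
cat <<'EOF' | grep -o '"mlockall":[a-z]*'
{"nodes":{"abc123":{"process":{"mlockall":true}}}}
EOF
```

A true value here means the JVM actually locked its memory; false with no log errors typically means the setting never reached the process.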