Can't lock memory on ES cluster

ES runs in a Docker container (a Debian-based image using the .deb installation). The cluster has dedicated master nodes and data nodes. A node keeps failing at startup with the following error:

[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2018-01-31T07:06:34,594][INFO ][o.e.n.Node               ] [esnode] stopping ...
[2018-01-31T07:06:34,659][INFO ][o.e.n.Node               ] [esnode] stopped
[2018-01-31T07:06:34,659][INFO ][o.e.n.Node               ] [esnode] closing ...
[2018-01-31T07:06:34,667][INFO ][o.e.n.Node               ] [esnode] closed
[2018-01-31T07:08:06,966][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-01-31T07:08:06,967][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2018-01-31T07:08:06,968][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-01-31T07:08:06,968][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
	# allow user 'elasticsearch' mlockall
	elasticsearch soft memlock unlimited
	elasticsearch hard memlock unlimited

I applied the following settings in the files inside the container and restarted the process via service elasticsearch restart.

/etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: true

/etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

/etc/default/elasticsearch
MAX_LOCKED_MEMORY=unlimited

I followed the documentation here; we use version 5.3.2. Nothing seems to work. Please let me know what I am missing.
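
For completeness, one way to check the limit the running process actually ends up with, from inside the container (assuming pgrep is available; the pattern matches the ES main class):

# find the Elasticsearch PID and show its effective memlock limit
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)
grep "Max locked memory" /proc/$ES_PID/limits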

hey,

have you seen the Install Elasticsearch with Docker documentation, which mentions the additional configuration that might be needed?

--Alex

@spinscale Thank you.

We are using version 5.3.2 with a Debian-based image. The documentation is for CentOS and suggests the following:

"The image offers several methods for configuring Elasticsearch settings with the conventional approach being to provide customized files, i.e. elasticsearch.yml, but it’s also possible to use environment variables to set options:"

I tried passing the Docker environment variables and observed the following JVM arguments:

"JVM arguments [-Xms7g, -Xmx7g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms7g, -Xmx7g, -Des.path.home=/usr/share/elasticsearch]"

When I query GET _nodes?filter_path=**.mlockall, I get false. There are no errors in the logs. We don't use Docker Compose, so let me know how I can set the following config:

--cap-add=IPC_LOCK --ulimit memlock=-1:-1 --ulimit nofile=65536:65536

docker run supports the --ulimit parameter, which is also mentioned in the documentation, along with a command to check the ulimit. Have you tried those?
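
For example, without Compose those flags go directly on the docker run command line; the image name below is the official one from the docs, so substitute your own:

# pass the capability and ulimits at container start
docker run -d \
  --cap-add=IPC_LOCK \
  --ulimit memlock=-1:-1 \
  --ulimit nofile=65536:65536 \
  -e "bootstrap.memory_lock=true" \
  docker.elastic.co/elasticsearch/elasticsearch:5.3.2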

I tried that option as well. I verified via docker inspect <container_name> and docker exec <container_name> env that the ulimit and memory_lock settings were applied. Both were set as expected.

But when I verify through curl localhost:9200/_nodes?filter_path=**.mlockall, I get false for all nodes. There are no errors in the logs either. It looks like the environment variable and the ulimit option are being ignored.

{
  "nodes": {
    "node1": {
      "process": {
        "mlockall": false
      }
    },
    "node2": {
      "process": {
        "mlockall": false
      }
    },
    "node3": {
      "process": {
        "mlockall": false
      }
    },
    "node4": {
      "process": {
        "mlockall": false
      }
    },
    "node5": {
      "process": {
        "mlockall": false
      }
    }
  }
}
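
Concretely, the checks I ran looked like this (<container_name> is a placeholder, as above; the last one assumes a shell is available inside the container):

# ulimits recorded in the container's host config
docker inspect --format '{{.HostConfig.Ulimits}}' <container_name>

# environment variables visible inside the container
docker exec <container_name> env

# effective memlock limit inside the container
docker exec <container_name> sh -c 'ulimit -l'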
