Bootstrap checks failed because memory is not locked

I’m running an Elasticsearch image (built on CentOS 7) on k8s with "bootstrap.memory_lock: true" in the elasticsearch.yml file. Inside the Docker image I added the following lines to /etc/security/limits.conf:

esuser soft memlock unlimited
esuser hard memlock unlimited

and added "session required pam_limits.so" to /etc/pam.d/login.

When I try to run the image on k8s, it fails with the following messages:

[2017-06-13T09:50:30,527][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2017-06-13T09:50:30,529][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2017-06-13T09:50:30,529][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2017-06-13T09:50:30,529][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'esuser' mlockall
esuser soft memlock unlimited
esuser hard memlock unlimited
......
[2017-06-13T09:50:36,938][INFO ][o.e.b.BootstrapChecks ] [es-master-1451757423-7hgdz] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked
[2017-06-13T09:50:36,950][INFO ][o.e.n.Node ] [es-master-1451757423-7hgdz] stopping ...
[2017-06-13T09:50:37,005][INFO ][o.e.n.Node ] [es-master-1451757423-7hgdz] stopped
[2017-06-13T09:50:37,005][INFO ][o.e.n.Node ] [es-master-1451757423-7hgdz] closing ...
[2017-06-13T09:50:37,025][INFO ][o.e.n.Node ] [es-master-1451757423-7hgdz] closed

PS: I can run the image directly in Docker with a docker run --ulimit memlock=-1:-1 ... command.
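For reference, the same memlock ulimit can be declared in Compose-style YAML; this is only a minimal sketch, and the service name and image tag are placeholders:

# docker-compose.yml sketch -- equivalent of docker run --ulimit memlock=-1:-1
version: "2.2"
services:
  elasticsearch:
    image: my-es-image:centos7     # placeholder image name
    ulimits:
      memlock:
        soft: -1                   # -1 means unlimited
        hard: -1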

It seems there is no proper way to set memlock to unlimited in a Deployment YAML file on k8s.
Any comments would be appreciated.

Did you solve it? I'm hitting this problem too and am still looking for a way to solve it.

Not yet. For now I have to set "bootstrap.memory_lock: false" and:
a. Set vm.swappiness to 1. (This setting affects all pods on the k8s node.)
b. Set "resources" in the YAML to restrict memory.
c. Set ES_JAVA_OPTS = -Xms*** -Xmx*** to half of the memory defined in "resources" (see the sketch below).
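A minimal sketch of what (b) and (c) look like in the Deployment's container spec; the image name and the 2Gi / 1g sizes are placeholders, with the heap set to half of the container memory limit:

# Deployment container spec (sketch) -- placeholder image and sizes
containers:
  - name: elasticsearch
    image: my-es-image:centos7       # placeholder image name
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms1g -Xmx1g"       # half of the 2Gi limit below
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"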

My problem is solved. I did the following:
a. In the RC file I added privileged: true (a fuller container spec is sketched below):

securityContext:
  privileged: true
  capabilities:
    add:
      - IPC_LOCK

b. On the nodes I set KUBE_ALLOW_PRIV="--allow-privileged=true".

Then ES can run with memory locked.
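For completeness, a minimal sketch of the container spec with that securityContext; the container name and image are placeholders, and it assumes the cluster allows privileged containers as described in (b):

# Container spec with memory locking allowed (sketch)
containers:
  - name: elasticsearch
    image: my-es-image:centos7    # placeholder image name
    securityContext:
      privileged: true
      capabilities:
        add:
          - IPC_LOCK              # capability needed for mlockall() inside the container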

Those settings are what I was already using. Unfortunately, they don't work for me.
I just saw that the issue was discussed at https://github.com/kubernetes/kubernetes/issues/3595
and that the PR was merged at https://github.com/kubernetes-incubator/cri-o/pull/639

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.