Trying to get mlockall to work...
I see this in the log:
[2016-03-28 20:06:26,568][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-03-28 20:06:26,569][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-03-28 20:06:26,570][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-03-28 20:06:26,570][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2016-03-28 20:06:26,570][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-03-28 20:06:27,255][INFO ][node ] [arrow] version[2.2.1], pid, build[d045fc2/2016-03-09T09:38:54Z]
I've made the change to /etc/security/limits.conf as suggested, and I've also moved the tmp dir, but I still see the warnings above. And when I run:
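For completeness, here's roughly how I'm double-checking the limits.conf change (a sketch — the paths are the stock Debian/Ubuntu ones, and 'elasticsearch' as the service user name is an assumption; adjust to your setup):

```shell
# Confirm the memlock entries are actually present; drop-in files under
# limits.d/ can override limits.conf, so check both (stock Debian/Ubuntu paths)
grep -R "memlock" /etc/security/limits.conf /etc/security/limits.d/ 2>/dev/null

# limits.conf is applied by PAM, so it only affects sessions started *after*
# the edit; this shows the memlock limit a fresh session for the service
# user would get ('elasticsearch' is an assumed user name)
sudo su -s /bin/bash -c 'ulimit -l' elasticsearch
```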
jhoff909@eip-elk-es2:~$ curl 172.16.0.103:9200/_nodes/stats/process?pretty | grep mlock
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1140  100  1140    0     0  52600      0 --:--:-- --:--:-- --:--:-- 54285
there is no mlockall setting anywhere in the output — grep matches nothing.
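(Side note: the table above is just curl's progress meter; with -s it's suppressed, so any output really would be grep matches. A sketch — the IP/port are my node's, substitute your own:)

```shell
# -s silences curl's progress meter, so the only output (if any) is lines
# that actually contain "mlock"; the node address is specific to my cluster
curl -s 172.16.0.103:9200/_nodes/stats/process?pretty | grep -i mlock
```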
Also, when I look at the limits on the running process, it looks like my change to limits.conf didn't take effect (and I have rebooted):
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             27836                27836                processes
Max open files            65535                65535                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       27836                27836                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
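That table is the running JVM's /proc entry; a sketch of how to pull just the relevant line (the pgrep pattern is an assumption about how the process is named on your box):

```shell
# Find the Elasticsearch JVM and read its memlock limit straight from /proc.
# 'elasticsearch' as a pgrep pattern is an assumption -- match whatever
# your process command line actually contains.
pid=$(pgrep -f elasticsearch | head -n1)
grep "Max locked memory" "/proc/${pid}/limits"
```

Note that "Max locked memory" here is still 65536 bytes, matching the RLIMIT_MEMLOCK warning in the log.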
What am I missing?
I've only made this change on one node in my cluster so far — I want to get it working there before rolling it out to the others.
Thanks in advance for the help.