JVM Memory in 5.2.2

I am in the process of migrating to 5.2.2; my existing installation is 2.4.2. I am running on AWS, working with a clean i3.large instance running the default Ubuntu (16.04.2 LTS). I have installed Elasticsearch 5.2.2 using Apt and the official repo as documented here. I have been working on this for several hours now, have trawled the various threads and support pages, and just cannot get bootstrap.memory_lock: true to work. If I comment this setting out the node launches fine; as soon as I uncomment it, the launch fails with an error. I have been running 1.x and 2.x in production for 18 months and have not had a problem with this. Any help or advice on what I might have missed would be hugely appreciated. Full details are below.
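
For reference, the repo-based install I followed looked roughly like this. I am paraphrasing the 5.x Debian install documentation from memory, so treat the exact repo line as an approximation rather than a copy of my shell history:

sudo apt-get install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update && sudo apt-get install elasticsearch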

My Java version is as follows:

java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

/etc/environment:

JAVA_HOME="/usr/lib/jvm/java-8-oracle/jre"

/etc/default/elasticsearch:

...
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
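
As an aside, my understanding is that MAX_LOCKED_MEMORY in this file is only honoured by the SysV init script; under systemd the limit has to come from the unit file instead. A quick way to confirm which init system is actually launching the service:

ps -p 1 -o comm=
# prints "systemd" on Ubuntu 16.04, in which case LimitMEMLOCK in the unit file is what counts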

/etc/elasticsearch/jvm.options:

...
-Xms7g
-Xmx7g
-Djava.io.tmpdir=/data/tmp

/usr/lib/systemd/system/elasticsearch.service:

...
LimitMEMLOCK=infinity
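
To confirm systemd has actually picked this value up, rather than just trusting the file on disk, something like the following should report the limit the unit will apply (assuming I have the property name right):

sudo systemctl daemon-reload
systemctl show elasticsearch --property=LimitMEMLOCK
# expect infinity (or the maximum 64-bit value); if it still shows 65536 the change has not been loaded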

/etc/security/limits.conf:

*               soft    memlock         unlimited
*               hard    memlock         unlimited
*               -       nofile          65536

(NOTE: I also tried elasticsearch in place of * as per the advice in the logs - I have also rebooted after each change)

/etc/elasticsearch/elasticsearch.yml:

network.bind_host: [_ec2_, _local_]
network.publish_host: _ec2_
transport.tcp.port: 9300
node.master: true
node.data: true
path.data: /data/index
path.logs: /data/logs
bootstrap.memory_lock: true
discovery.type: ec2

Running ulimit -a suggests that all is well:

/etc/security/limits.d$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 61104
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 61104
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
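
One caveat: ulimit -a only reports the limits of my interactive shell, not of the process that systemd launches, so a more direct check (assuming the main Java process can be found by its class name like this) is to look at the running elasticsearch process itself:

sudo cat /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/limits | grep 'locked memory'
# the "Max locked memory" line needs to read unlimited for mlockall to succeed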

...but whatever I do, I still get the following warnings in the logs:

[2017-03-07T12:05:23,868][WARN ][o.e.c.l.LogConfigurator  ] ignoring unsupported logging configuration file [/etc/elasticsearch/logging.yml], logging is configured via [/etc/elasticsearch/log4j2.properties]
[2017-03-07T12:05:23,952][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2017-03-07T12:05:23,952][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2017-03-07T12:05:23,952][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2017-03-07T12:05:23,952][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited
[2017-03-07T12:05:23,953][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2017-03-07T12:05:24,111][INFO ][o.e.n.Node               ] [production2-node1c] initializing ...
[2017-03-07T12:05:24,207][INFO ][o.e.e.NodeEnvironment    ] [production2-node1c] using [1] data paths, mounts [[/data (/dev/nvme0n1)]], net usable_space [413.1gb], net total_space [435.3gb], spins? [no], types [ext4]
[2017-03-07T12:05:24,207][INFO ][o.e.e.NodeEnvironment    ] [production2-node1c] heap size [6.9gb], compressed ordinary object pointers [true]
[2017-03-07T12:05:24,208][INFO ][o.e.n.Node               ] [production2-node1c] node name [production2-node1c], node ID [kww3BEYASFyTCWlzg2ti_g]
[2017-03-07T12:05:24,210][INFO ][o.e.n.Node               ] [production2-node1c] version[5.2.2], pid[1592], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/4.4.0-64-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
...
[2017-03-07T12:05:26,342][INFO ][o.e.p.PluginsService     ] [production2-node1c] loaded plugin [discovery-ec2]
[2017-03-07T12:05:26,342][INFO ][o.e.p.PluginsService     ] [production2-node1c] loaded plugin [repository-s3]
[2017-03-07T12:05:29,130][INFO ][o.e.n.Node               ] [production2-node1c] initialized
[2017-03-07T12:05:29,130][INFO ][o.e.n.Node               ] [production2-node1c] starting ...
[2017-03-07T12:05:29,242][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 49:01:61:de:d8:bb:4d:96
[2017-03-07T12:05:29,324][INFO ][o.e.t.TransportService   ] [production2-node1c] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-03-07T12:05:29,338][WARN ][o.e.b.BootstrapChecks    ] [production2-node1c] memory locking requested for elasticsearch process but memory is not locked
[2017-03-07T12:05:32,428][INFO ][o.e.c.s.ClusterService   ] [production2-node1c] new_master {production2-node1c}{kww3BEYASFyTCWlzg2ti_g}{b0jn8ToZTPeFoeGYbrghQg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-03-07T12:05:32,455][INFO ][o.e.h.HttpServer         ] [production2-node1c] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-03-07T12:05:32,455][INFO ][o.e.n.Node               ] [production2-node1c] started
[2017-03-07T12:05:32,471][INFO ][o.e.g.GatewayService     ] [production2-node1c] recovered [0] indices into cluster_state

So I have investigated my existing 2.x clusters and it would appear that mlockall is not active on those either, despite being configured. The problem therefore has nothing to do with 5.x but with the way I have my AWS Ubuntu instances configured; it just happens that 5.x is stricter about bootstrap.memory_lock: true and my nodes don't start if the check does not pass. I have followed all of the guidance and the instructions in the manual but just cannot seem to get this to work. I don't know if this is an AWS-specific thing or an Ubuntu one, but any tips for resolving it would be much appreciated.
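
For anyone who wants to check the same thing on their own nodes, the node info API reports whether mlockall actually succeeded; as far as I can tell this works on both 2.x and 5.x:

curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
# "mlockall" : false means the lock did not take effect, whatever the config says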

Have you gone through the following settings? The systemd configuration fixed it for me.
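
In case it helps, the systemd part is normally done with a drop-in override rather than by editing the packaged unit file (which a package upgrade can overwrite). Roughly like this; the file name under the .d directory is my own choice:

sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
printf '[Service]\nLimitMEMLOCK=infinity\n' | sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch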

I have this configuration in my limits.conf; that is the only difference I noticed from yours:

elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft nproc 2048
elasticsearch hard nproc 2048
elasticsearch - memlock unlimited

Besides that, don't forget to reload the systemd configuration after changing /usr/lib/systemd/system/elasticsearch.service:

systemctl daemon-reload
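
After the reload, restarting and grepping the startup log is a quick way to see whether the bootstrap check is still complaining. I am assuming the default package log path here; adjust it to wherever path.logs points:

sudo systemctl restart elasticsearch
grep 'memory is not locked' /var/log/elasticsearch/*.log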

Hope it helps.

Regards,

Thanks Christian, I had previously gone through this but just did it again on a completely clean server and this time managed to get it to work. It could also have been the systemctl daemon-reload command in Rodrigo's post that forced the refresh. Thanks to you both for your help, very much appreciated!
