Cannot allocate memory error when I try to get the Elasticsearch version (v5.6.8)

Hello everyone,

I am in the process of configuring an Elasticsearch environment. After setting up the parameters according to the documentation, I get an error message when I try to get the version, even though there is free memory at the OS level:


[root@ip-xx-xxx-xx-xx ~]# free -m   (total memory: 8G, free memory: 2.8G)
             total       used       free     shared    buffers     cached
Mem:          7979       5168       2810          0         66        384
-/+ buffers/cache:       4717       3261
Swap:            0          0          0

[root@ip-xx-xxx-xx-xx ~]# /usr/share/elasticsearch/bin/elasticsearch --version

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006ca660000, 4120510464, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 4120510464 bytes for committing reserved memory.
An error report file with more information is saved as:
/root/hs_err_pid11983.log
Here is my configuration:

[root@ip-xx-xxx-xx-xx ~]# uname -a (Amazon Linux):
Linux ip-xx-xxx-xx-xxx 4.14.33-51.34.amzn1.x86_64 #1 SMP Fri Apr 13 18:18:26 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Elasticsearch version: Version: 5.6.8, Build: 688ecce/2018-02-16T16:46:30.010Z, JVM: 1.8.0_171

[root@ip-xx-xxx-xx-xx ~]# cat /usr/lib/systemd/system/elasticsearch.service.d/elasticsearch.conf
[Service]
LimitMEMLOCK=infinity

[root@ip-xx-xxx-xx-xx ~]# cat /etc/elasticsearch/jvm.options | grep Xm
-Xms4g
-Xmx4g

[root@ip-xx-xxx-xx-xx ~]# cat /etc/security/limits.d/00-defaults.conf

*             soft    nofile     100000
*             hard    nofile     100000
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[root@ip-xx-xxx-xx-xx ~]# cat /etc/sysctl.d/00-defaults.conf
vm.swappiness=1
vm.max_map_count=262144
..
..

[root@ip-xx-xxx-xx-xx ~]# cat /etc/elasticsearch/elasticsearch.yml | grep bootstrap.memory
bootstrap.memory_lock: true
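
For reference, once the node is up, whether the memory lock actually took effect can be checked through the nodes API (assuming the default localhost:9200 binding):

curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'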

[root@ip-xx-xxx-xx-xx ~]# ps -ef | grep -i java (the service starts fine with "/sbin/service elasticsearch start"; however, I get the error shown at the beginning)

496 8328 1 2 21:56 ? 00:00:31 /usr/bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.permissionsUseCanonicalPath=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Xms4g -Xmx4g -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid -d -Edefault.path.logs=/var/log/elasticsearch -Edefault.path.data=/var/lib/elasticsearch -Edefault.path.conf=/etc/elasticsearch

If I try to get the Elasticsearch version with a 2g heap (instead of 4g), I get the version without problems:

ES_JAVA_OPTS="-Xms2g -Xmx2g" /usr/share/elasticsearch/bin/elasticsearch --version
Version: 5.6.8, Build: 688ecce/2018-02-16T16:46:30.010Z, JVM: 1.8.0_171

However, if I try with 4g (which is the heap size actually configured for the JVM), I get the error message:

ES_JAVA_OPTS="-Xms4g -Xmx4g" /usr/share/elasticsearch/bin/elasticsearch --version


OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006ca660000, 4120510464, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 4120510464 bytes for committing reserved memory.
An error report file with more information is saved as:
/var/log/elasticsearch/hs_err_pid18376.log
free -m
             total       used       free     shared    buffers     cached
Mem:          7979       5115       2863          0         58        343
-/+ buffers/cache:       4713       3265
Swap:            0          0          0

So, the question is: why do I get this error, considering that there is enough free memory and, more importantly, that the configured heap size is actually 4g (not 2g)?

Thanks in advance!

When you have an ES instance running that uses 4G of memory, your OS uses memory as well, with the result that you have 2.8G free. When you run elasticsearch --version it actually starts a new JVM with a 4G heap, and that much memory is not available (that is also why ES_JAVA_OPTS="-Xms2g -Xmx2g" works).
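
You can see the mismatch directly from the numbers above (a rough back-of-the-envelope check; the values come from the free -m output and the JVM error message):

# the new JVM asked to commit 4120510464 bytes for its 4g heap:
echo $(( 4120510464 / 1024 / 1024 ))   # ≈ 3929 MB
# but only ~3265 MB were free (the "-/+ buffers/cache" line of free -m),
# because the running 4g node already holds its heap, so mmap fails with errno=12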

Thanks @pjanzen. It makes sense.

Since we need to create several ES instances (AWS AMI), I was thinking of creating a simple formula for configuring the Java heap size. Initially my formula was 50% of the total RAM.

So, if for example we had an AWS EC2 instance type with 8 GB, the Java heap would get 4g (total memory * 1/2). However, I believe that this formula will not work because of the error I described earlier.

Therefore, I think an optimal formula could be this: (TotalRamMemory * 1/4) + ((TotalRamMemory * 1/4) * 0.5) or, which is the same, TotalRamMemory * 3/8.

That way, if I have an AWS EC2 instance type with 8 GB of RAM, the Java heap would be sized at 3 GB. On the other hand, if my AWS EC2 instance type has 32 GB, the Java heap would be 12 GB.

The servers where the ES instances will be running will not share resources with other applications (those servers will be dedicated to the ES instances).

Keep in mind that the idea is not to take 50% of RAM for Java Heap, but a little less than 50% instead.
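
To make it concrete, this is the kind of script I have in mind for the AMI (just a sketch of my own, not an official recommendation; the jvm.options path is the package default and may differ in other setups):

#!/bin/bash
# Sketch: size the ES heap at 3/8 of total RAM (a bit under half) and write it
# into the packaged jvm.options (/etc/elasticsearch/jvm.options is the RPM default).
total_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
heap_mb=$(( total_mb * 3 / 8 ))
sed -i "s/^-Xms.*/-Xms${heap_mb}m/; s/^-Xmx.*/-Xmx${heap_mb}m/" /etc/elasticsearch/jvm.options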

What do you think about this formula for sizing the Java heap?

Thanks a lot!

I have never put multiple ES instances on the same host, so I cannot speak from experience. That said, your calculations make sense I guess. Is there a particular reason to have multiple instances on the same box? Would that not have the same effect as multiple AWS instances? Mind you, I am unfamiliar with AWS and how it works. I work for a large ISP with its own VM / container resources, so hosts are not an issue for me.

Thanks @pjanzen for your reply.

The boxes that I need to create will not have multiple ES instances; each box will have ONLY ONE ES instance.

What I meant by "create several ES instances (AWS AMI)" was to create several AWS boxes and the same number of ES instances (each box will have only one ES instance).

That's why I think that sizing each Java heap at TotalRamMemory * 3/8 would be reasonable.

With respect to AWS and how it works, it is not very different from what you probably already know about other servers or virtualized environments. Basically, these servers work just like any others; the main difference is in the way they are provisioned, but once created, operating them is the same as in other cases.

Thanks again for your time and valuable input.
