Invalid initial heap size

Hey guys,

I have a machine running Elasticsearch as a service. It initially had 3 GB of RAM available, which worked. Now that I've been able to increase the machine's RAM, I want to give Elasticsearch 9 GB, but I always get the error "invalid initial heap size".

How can I fix this error, or what is the problem?

Best regards

Given the physical memory of your machine, it is not recommended to assign more than half of the RAM to Elasticsearch.

My physical RAM is 12 GB, so as far as I know I can assign 6 GB, but Elasticsearch throws the exception every time I set the heap higher than 3 GB; it throws the error at 4, 5, 6 ... 9 GB.

Right, if your physical memory is 12 GB, you cannot allocate more than 6 GB to Elasticsearch.

Yes, but 4, 5 and 6 GB throw this error too.

Oh, OK!
Could you give me a little more detail about the error?

What information do you need?

It sounds as if you may have two different values for the initial and maximum JVM heap size, and that is not allowed. Both must have the same value: in your old case 3 GB and in the new case 6 GB.

There are several ways to specify the initial and maximum heap size: it can be done through the ES_JAVA_OPTS environment variable, but more commonly it's done in the jvm.options file, which you should find in your Elasticsearch installation.

For instance, if you want a 6 GB heap size, make sure to set

-Xms6g
-Xmx6g

in your elasticsearch-6.x.x/config/jvm.options file and then restart the service. This should set both initial and maximum JVM heap size to 6 GB.
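
If you prefer the environment variable route instead, a minimal sketch (assuming you start Elasticsearch from the tarball directory; the 6g value is just this thread's example):

ES_JAVA_OPTS="-Xms6g -Xmx6g" ./bin/elasticsearch

Either way, the important part is that -Xms and -Xmx carry the same value.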

Thanks for your reply @Bernt_Rostad

I set the initial and maximum heap size exactly as you said, but the service entered the failed state and exited without an error in the journal.

@Charaf_Ahmed @Bernt_Rostad
Here is the error message that was thrown:

Aug 09 15:55:12 elastic elasticsearch[7434]: Invalid initial heap size: -Xms6g
Aug 09 15:55:12 elastic elasticsearch[7434]: The specified size exceeds the maximum representable size.

This error message clearly says that the server you are running Elasticsearch on doesn't have sufficient memory to allocate 6 GB for the JVM heap. Or are you running on a virtual server or in a container with less memory than the physical hardware offers?

Whatever the reason, you'll either need to add more memory to the server or keep using 3 GB as the JVM heap size.

Hey @Bernt_Rostad,

When I run free -h I can see that 1 GB should be available, but it still throws an error. Here is the output of the command:

              total        used        free      shared  buff/cache   available
Mem:            11G        4.7G        1.7G        634M        5.2G        5.9G

Looks like only 5.9G is available for starting Elasticsearch, so you probably need to add more memory to the server to start up Elasticsearch with a JVM Heap size of 6G.

Oh sorry, the machine was already running Elasticsearch (with 3 GB of RAM) in the output above, so there is enough RAM to use.

On my servers, the available value reported by free -h is the fixed total of memory available for starting a new application, so I believe your value of 5.9G is the maximum too. It certainly fits the error message you got when trying to start Elasticsearch; hence you cannot start an application using more than 5.9G.

I think you don't understand me 🙂

On my machine Elasticsearch runs with 3 GB of RAM. When I ran free -h, Elasticsearch was already running with that 3 GB, so 3 GB of RAM was already reserved for it at that moment. That means 5.9 GB of RAM is free to add on top of the 3 GB Elasticsearch is already using.

Because Elasticsearch should only use half of the RAM installed on the machine, it can use 6 GB of the 12 GB installed. So I can add 3 GB to Elasticsearch (which is now running with 3 GB), and 2.9 GB of RAM would still be completely free after adding it to the JVM.

Best regards,
Robert

Hello,

Just to clarify the free -m/-h command, I am going to quote this here (a worked example with your numbers follows the list):

  • total : Your total (physical) RAM (excluding a small bit that the kernel permanently reserves for itself at startup); that's why it shows ca. 11.7 GiB, and not 12 GiB, which you probably have.
  • used : memory in use by the OS.
  • free : memory not in use.

total = used + free

  • shared / buffers / cached : This shows memory usage for specific purposes; these values are included in the value for used.
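
As a rough check against the free -h output you posted earlier (note that newer versions of free, like yours, report buff/cache as its own column, so there the sum works out as total ≈ used + free + buff/cache):

total ≈ used + free + buff/cache
 11G ≈ 4.7G + 1.7G + 5.2G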

Can you please tell me if this is a bare-metal server or a VM of some sort (KVM/VMware/Docker/etc.)?

Also, it would be helpful if you could stop Elasticsearch and then show us free -m. I want to see what is available or free when nothing is running on your system.
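
For example, something like this (assuming the service is managed by systemd; adjust the service name to your install):

sudo systemctl stop elasticsearch
free -h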

This is my free -h, just as an example (my node runs with a 30 GB heap):

-Xms30g
-Xmx30g

              total        used        free      shared  buff/cache   available
Mem:            94G         32G        753M         72M         61G         61G
Swap:            0B          0B          0B

The VM is running on VMware, and free -h shows me the following:

              total        used        free      shared  buff/cache   available
Mem:            11G        5.1G        138M        606M        6.3G        5.5G
Swap:          8.0G        3.8M        8.0G

@maziyar any ideas?

Sorry I missed your reply. Is it possible to stop Elasticsearch, do a full reboot without it, and check free -h?

My best guess is that if ES has 3G and the total used is 5.1GB, then you have about 2GB of memory being used by the system itself. So you have to subtract that from 11GB, which leaves about 9GB, and half of that is 9/2 = 4.5GB. Have you tried this? What is the maximum heap size you can reach by trial and error? (e.g. start with 3.5G and see where it crashes)
Also, it could be due to the host; sometimes you give the VM memory that the host doesn't actually have. Something like VM ballooning, where you don't expect all the VMs to use all their CPU and memory at once, so you give away twice as much as you have available.

But starting with a clean system after a reboot, with no ES running, and checking free -h can help. Combining that result with the highest heap size that works before it crashes will give the answer.
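
To make the trial-and-error part concrete, here is a rough sketch (the jvm.options path and the service name are assumptions for a package install; for a tarball install the file would be e.g. elasticsearch-6.x.x/config/jvm.options):

# Hypothetical loop: raise the heap in steps and note where startup fails.
# systemctl's exit code is only a rough signal; check journalctl -u elasticsearch
# after a failure for the real error.
for size in 3500m 4g 4500m 5g 6g; do
  sudo sed -i "s/^-Xms.*/-Xms$size/; s/^-Xmx.*/-Xmx$size/" /etc/elasticsearch/jvm.options
  sudo systemctl restart elasticsearch && echo "$size started" || echo "$size failed"
done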