There is insufficient memory for the Java Runtime Environment to continue

Hello, I need help. Our system has 8 data nodes, and each Elasticsearch VM has 16GB of RAM.
Normally the Elasticsearch data nodes use 60-65% of RAM; I check this with the “top” command on Ubuntu. Unfortunately, about once a month the Elasticsearch service on one of the data nodes stops, with the error below.
I checked Grafana, and the heap size was about 6GB when the node restarted. So 8GB (heap) + 4GB (max direct memory) + 1GB (OS, probably even 500MB) = 13GB, which should leave about 2GB free.

vm.max_map_count on the data nodes is 262144.

Has anyone else faced this problem? Could you please help? Thank you.




There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (mmap) failed to map 16384 bytes. Error detail: committing reserved memory.

Possible reasons:
  The system is out of physical RAM or swap space
  This process has exceeded the maximum number of memory mappings (check below for /proc/sys/vm/max_map_count and Total number of mappings)
  This process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap

Possible solutions:
  Reduce memory load on the system
  Increase physical memory or swap space
  Check if swap backing store is full
  Decrease Java heap size (-Xmx/-Xms)
  Decrease number of Java threads
  Decrease Java thread stack sizes (-Xss)
  Set larger code cache with -XX:ReservedCodeCacheSize=
  JVM is running with Zero Based Compressed Oops mode in which the Java heap is placed in the first 32GB address space. The Java Heap base address is the maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress to set the Java Heap base and to place the Java Heap above 32GB virtual address.

This output file may be truncated or incomplete.

Out of Memory Error (os_linux.cpp:2936), pid=2406074, tid=3962668

JRE version: OpenJDK Runtime Environment (24.0+36) (build 24+36-3646)
Java VM: OpenJDK 64-Bit Server VM (24+36-3646, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -F%F -- %E" (or dumping to /usr/share/elasticsearch/core.2406074)

---------------  S U M M A R Y ------------

Command Line: -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j2.formatMsgNoLookups=true -Djava.locale.providers=CLDR -Dorg.apache.lucene.vectorization.upperJavaFeatureVersion=24 -Des.distribution.type=deb -Des.java.type=bundled JDK --enable-native-access=org.elasticsearch.nativeaccess,org.apache.lucene.core --enable-native-access=ALL-UNNAMED --illegal-native-access=deny -XX:ReplayDataFile=/var/log/elasticsearch/replay_pid%p.log -Des.entitlements.enabled=true -XX:+EnableDynamicAgentLoading -Djdk.attach.allowAttachSelf=true --patch-module=java.base=lib/entitlement-bridge/elasticsearch-entitlement-bridge-9.0.0.jar --add-exports=java.base/org.elasticsearch.entitlement.bridge=org.elasticsearch.entitlement,java.logging,java.net.http,java.naming,jdk.net -Xms8g -Xmx8g -Djava.io.tmpdir=/tmp/elasticsearch-15812986021906771924 -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:MaxDirectMemorySize=4294967296 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=25 --module-path=/usr/share/elasticsearch/lib --add-modules=jdk.net --add-modules=jdk.management.agent --add-modules=ALL-MODULE-PATH -Djdk.module.main=org.elasticsearch.server org.elasticsearch.server/org.elasticsearch.bootstrap.Elasticsearch

Host: INTEL(R) XEON(R) GOLD 6530, 8 cores, 15G, Ubuntu 22.04.5 LTS
Time: Mon Nov 24 13:30:21 2025 +04 elapsed time: 5879127.478827 seconds (68d 1h 5m 27s)




Welcome to the forum.

Er, why is the “max direct memory” 4GB?

EDIT: I see you have set this option via -XX:MaxDirectMemorySize=4294967296. Note that this memory and the heap are NOT the total memory that Elasticsearch might use. Look at the rss field from ps to get a guide to the actual (current) process size in memory, and vsz for the total process size (noting this number can be massively misleading).
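
For example, something like this shows both fields for the Elasticsearch process (an illustrative command, adjust to your setup; both values are in KB):

    ps -o pid,rss,vsz,cmd -p $(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)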

FYI, in Linux the OS tries to use all memory as best it can, so a portion will almost always be allocated as file system cache (shown as buff/cache in top).
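
A quick way to see that split is free; the "available" column is usually the more useful number than "free", since the buff/cache memory is reclaimed when applications need it:

    free -h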

Does the “once in a month” crash always happen at the same time, e.g. the first Tuesday at 2am, or is it (as far as you can tell) random? During working hours or outside working hours? Does just one data node exit, and if so, is it always the same one, or does it vary across the 8?

Do you have OS logs around the time of the crash? What do they show?
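
For example, something like this (using the crash time from your hs_err log, purely as an example window) would show kernel messages and any OOM-killer activity around the crash:

    journalctl -k --since "2025-11-24 13:00" --until "2025-11-24 14:00"
    dmesg -T | grep -Ei "out of memory|oom-killer|killed process"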

Please also supply the Elasticsearch version.

Elasticsearch also ships with its own internal monitoring; have you looked there? Are there any sudden spikes before the crash?
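
For a quick point-in-time check you can also pull JVM and OS memory stats straight from the nodes stats API (adjust host/port/auth for your cluster); this reports heap used/max and OS memory per node, which you can compare against what Grafana shows:

    curl -s "http://localhost:9200/_nodes/stats/jvm,os?pretty"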

Have you tried just increasing the value of /proc/sys/vm/max_map_count?
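
If you do want to experiment with that, it would look roughly like this (524288 is just an example value, not an official recommendation, and the file name is arbitrary):

    sudo sysctl -w vm.max_map_count=524288
    echo "vm.max_map_count=524288" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
    sudo sysctl --system

You can also count how many mappings the process actually has, to see how close it gets to the limit:

    wc -l /proc/$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)/maps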


Thank you for the reply. My Elasticsearch version is 9.0.0.

  1. -XX:MaxDirectMemorySize is not set by me. I only set -Xms8g and -Xmx8g. I think Elasticsearch sets it automatically.

  2. The /proc/sys/vm/max_map_count value is 262144. ChatGPT said that is the recommended value.

  3. a) For the Elasticsearch JVM:
     RSS: 10001156 KB ≈ 9.53 GB
     VSZ: 209930568 KB ≈ 200 GB (normal for a JVM)

     b) Elasticsearch-cli is 102MB.

  4. “Once a month” is not fixed; sometimes it happens twice a week. And it is not the same node: the problem has occurred on different data nodes at different times (at night, in the morning). All data nodes have the same RAM, CPU, and JVM options.

  5. dmesg | grep -Ei "oom|memory|fail|mmap"
     I saw a vmemmap alloc failure error:

[637193.580638] kworker/u256:0: vmemmap alloc failure: order:9, mode:0x4cc0(GFP_KERNEL|__GFP_RETRY_MAYFAIL), nodemask=(null),cpuset=/,mems_allowed=0
[637193.580698] vmemmap_alloc_block+0xab/0x103
[637193.580703] vmemmap_alloc_block_buf+0x32/0x3c
[637193.580705] vmemmap_populate_hugepages+0xd1/0x2a4
[637193.580709] vmemmap_populate+0x3f/0xa9
[637193.580711] __populate_section_memmap+0x3c/0x57
[637193.580726] arch_add_memory+0x45/0x60
[637193.580728] add_memory_resource+0x12c/0x320
[637193.580730] __add_memory+0x40/0x90
[637193.580732] acpi_memory_enable_device+0xe1/0x160

  6. I checked journalctl:
     journalctl -u elasticsearch.service
    Sep 17 12:24:37 prod-elastic-data01 systemd[1]: Stopping Elasticsearch...
    Sep 17 12:24:51 prod-elastic-data01 systemd[1]: elasticsearch.service: Deactivated successfully.
    Sep 17 12:24:51 prod-elastic-data01 systemd[1]: Stopped Elasticsearch.
    Sep 17 12:24:51 prod-elastic-data01 systemd[1]: elasticsearch.service: Consumed 1month 1d 1h 26min 49.671s CPU time.
    Sep 17 12:24:52 prod-elastic-data01 systemd[1]: Starting Elasticsearch...
    Sep 17 12:25:22 prod-elastic-data01 systemd[1]: Started Elasticsearch.
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: #
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: # There is insufficient memory for the Java Runtime Environment to continue.
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: # Native memory allocation (mmap) failed to map 16384 bytes. Error detail: committing reserved memory.
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: [thread 3962669 also had an error]
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: # An error report file with more information is saved as:
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406074]: # /var/log/elasticsearch/hs_err_pid2406074.log
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406001]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4333466000, 16384, 0) failed; error='Not enough space' (errno=12)
    Nov 24 13:30:21 prod-elastic-data01 systemd-entrypoint[2406001]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4362afa000, 16384, 0) failed; error='Not enough space' (errno=12)
    Nov 24 13:30:24 prod-elastic-data01 systemd-entrypoint[2406001]: ERROR: Elasticsearch exited unexpectedly, with exit code 1
    Nov 24 13:30:24 prod-elastic-data01 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
    Nov 24 13:30:24 prod-elastic-data01 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
    Nov 24 13:30:24 prod-elastic-data01 systemd[1]: elasticsearch.service: Consumed 2month 2d 52min 34.705s CPU time.