There is insufficient memory for the Java Runtime Environment to continue

A month since you last posted, @Aysel_Guliyeva - any update? Nodes staying up? Discovered something interesting/new?

It helps other forum users to know if, and how, issues get resolved.

Hello @RainTown. Today it crashed, and I checked the logs:

2026-01-28 19:53:46 pid=1095422 maps=262146 rss=10897152KB
2026-01-28 19:53:51 pid=1095422 maps=262146 rss=10894044KB

When it restarted, the map count was very high. The max map count is set to 262146.
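For reference, a quick way to compare the live count against the limit on a node (the systemd PID lookup below is just one way to find the Elasticsearch PID, so adjust it to your setup):

```bash
# Compare the Elasticsearch JVM's live map count against the kernel limit.
ES_PID=$(systemctl show -p MainPID --value elasticsearch)
echo "maps:  $(wc -l < /proc/${ES_PID}/maps)"
echo "limit: $(sysctl -n vm.max_map_count)"
```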

In the other log:

Total: reserved=10459595KB, committed=9087899KB
malloc: 159607KB #813769, peak=234439KB #844942
mmap: reserved=10299988KB, committed=8928292KB

              Java Heap (reserved=8388608KB, committed=8388608KB)
                        (mmap: reserved=8388608KB, committed=8388608KB, at peak)
                  Class (reserved=1053377KB, committed=30081KB)
                        (classes #41416)
                        (  instance classes #39834, array classes #1582)
                        (malloc=4801KB #124707) (peak=4961KB #124131)
                        (mmap: reserved=1048576KB, committed=25280KB, at peak)
                        (  Metadata:   )
                        (    reserved=196608KB, committed=181568KB)
                        (    used=179527KB)
                        (    waste=2041KB =1.12%)
                        (  Class space:)
                        (    reserved=1048576KB, committed=25280KB)
                        (    used=23790KB)
                        (    waste=1490KB =5.90%)
                 Thread (reserved=222074KB, committed=25266KB)
                        (threads #218)
                        (stack: reserved=221320KB, committed=24512KB, peak=24512KB)
                        (malloc=502KB #1315) (peak=520KB #1526)
                        (arena=252KB #430) (peak=2172KB #54)
                   Code (reserved=281476KB, committed=144928KB)
                        (malloc=33787KB #53428) (peak=44992KB #111826)
                        (mmap: reserved=247688KB, committed=111140KB, at peak)
                        (arena=1KB #1) (peak=35KB #3)
                     GC (reserved=243913KB, committed=243913KB)
                        (malloc=46773KB #49227) (peak=76154KB #48045)
                        (mmap: reserved=197140KB, committed=197140KB, at peak)
                        (arena=0KB #0) (peak=136KB #8)
              GCCardSet (reserved=560KB, committed=560KB)
                        (malloc=560KB #8218) (peak=28372KB #18957)
               Compiler (reserved=2748KB, committed=2748KB)
                        (malloc=2584KB #1259) (peak=2628KB #1274)
                        (arena=164KB #6) (peak=83448KB #32)
               Internal (reserved=3819KB, committed=3815KB)
                        (malloc=3779KB #64878) (peak=3780KB #64977)
                        (mmap: reserved=40KB, committed=36KB, at peak)
                  Other (reserved=11575KB, committed=11575KB)
                        (malloc=11575KB #389) (peak=13280KB #337)
                 Symbol (reserved=35843KB, committed=35843KB)
                        (malloc=31903KB #474275) (peak=32783KB #530575)
                        (arena=3940KB #1) (at peak)
 Native Memory Tracking (reserved=15540KB, committed=15540KB)
                        (malloc=1235KB #14572) (peak=1235KB #14577)
                        (tracking overhead=14305KB)
            Arena Chunk (reserved=105KB, committed=105KB)
                        (malloc=105KB #557) (peak=96986KB #2474)
                 Module (reserved=1244KB, committed=1244KB)
                        (malloc=1244KB #12385) (at peak)
              Safepoint (reserved=8KB, committed=8KB)
                        (mmap: reserved=8KB, committed=8KB, at peak)
        Synchronization (reserved=730KB, committed=730KB)
                        (malloc=730KB #7118) (peak=817KB #7214)
         Serviceability (reserved=20KB, committed=20KB)
                        (malloc=20KB #48) (peak=61KB #415)
              Metaspace (reserved=197845KB, committed=182805KB)
                        (malloc=1237KB #806) (peak=1270KB #981)
                        (mmap: reserved=196608KB, committed=181568KB, at peak)
   String Deduplication (reserved=1KB, committed=1KB)
                        (malloc=1KB #8) (at peak)
        Object Monitors (reserved=110KB, committed=110KB)
                        (malloc=110KB #564) (peak=1972KB #10098)
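This breakdown comes from Native Memory Tracking. Assuming NMT stays enabled (`-XX:NativeMemoryTracking=summary` or `detail`), the same data can also be pulled from the running JVM and diffed over time, for example:

```bash
# Sketch: run these as the same OS user as the Elasticsearch process.
jcmd <pid> VM.native_memory baseline      # record a baseline
# ...wait while the maps/memory grow...
jcmd <pid> VM.native_memory summary.diff  # show what changed since the baseline
```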

I don't think it had an effect, because I had applied it before and I can now see this setting in the elasticsearch service. But the map count still reached the limit and the elasticsearch service crashed.

Are your nodes still on 9.0.0?

Maybe the workaround did not take (not sure on that), or maybe it just bought some more time, but you do seem to have hit the original "maps leak bug", right? That bug is known, documented, and apparently also fixed in newer releases.

In an earlier post you wrote that `wc -l /proc/$pid/maps` was in the 85k range. The limit was 262146. So if you have logs going back to the last restart, maybe you can see whether this count increased steadily/slowly, or whether something caused it to increase rapidly over a short period of time. Either way, you ran out of slots on that node. It would also be helpful to have the history of this value for the last couple of months on ALL your nodes, for comparative purposes: to see whether it's impacting everywhere or just some specific data nodes, and how it behaved with your load when running 9.0.0.
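If you don't already collect that history, a minimal sketch for doing so (run from cron every few minutes; the service name, log path and line format are assumptions) could look like this:

```bash
#!/usr/bin/env bash
# Append a timestamped map count for the local Elasticsearch node to a history file.
ES_PID=$(systemctl show -p MainPID --value elasticsearch)
if [ -n "$ES_PID" ] && [ "$ES_PID" != "0" ]; then
  echo "$(date -Is) pid=${ES_PID} maps=$(wc -l < /proc/${ES_PID}/maps)" \
    >> /var/log/es-maps-history.log
fi
```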

IMvHO you now have enough evidence to support the version upgrade. I'd also suggest upgrading to the latest release, if you can.

EDIT: If you find it's increasing slowly on all data nodes, and you cannot upgrade, then you could put in some kind of alert so that when the count passes, say, 200k, a managed restart of that specific data node is performed.
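A rough sketch of that check (the threshold and the action taken are assumptions to adapt to your environment):

```bash
#!/usr/bin/env bash
# Warn (or trigger a managed restart) when the map count crosses a threshold.
THRESHOLD=200000
ES_PID=$(systemctl show -p MainPID --value elasticsearch)
COUNT=$(wc -l < /proc/${ES_PID}/maps)
if [ "$COUNT" -ge "$THRESHOLD" ]; then
  logger -t es-maps "map count ${COUNT} >= ${THRESHOLD} on $(hostname)"
  # hook in your alerting or an orchestrated node restart here
fi
```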

@RainTown
Thank you for the answer.
Yes, we haven't upgraded yet. I checked 2 other data nodes and the current map counts on those nodes are about 51K and 140K. On the crashed node, the current map count is 21K after restarting. It is late here now; tomorrow I will check all nodes. We added a lot of APM rules and some ESQL query rules which run every 3 minutes. The node restarts started after we applied APM. I will check tomorrow and write back.

P.S. I keep the last 3 days of logs for the map count and the Native Memory Tracking output.

You can share, and I will at least read what you share.

But now it would take significant convincing for me to move away from the "upgrade from 9.0.0" advice, which at least 2 others in the thread suggested too.

Yeah, that's a little bit disappointing. In the problem-solving/investigating phase you should always try to keep all the diagnostic information you collected for as long as you are investigating the case. But it is what it is.

This is a default setting in the OOTB jvm.options file supplied with elasticsearch in 9.2.4, and several releases prior. It is accompanied by an explanatory comment. See:

$ grep -Fl org.apache.lucene.store.MMapDirectory.sharedArenaMaxPermits=1 /proc/[0-9]*/cmdline
/proc/5823/cmdline

should tell you whether that setting is applied to your running JVMs, but any upgrade brings a ton of other patches/fixes too.
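To see the setting together with its explanatory comment in the shipped config, something like this should work (the path is an assumption: /etc/elasticsearch/ for package installs, $ES_HOME/config/ for archive installs):

```bash
# Show the setting plus the few comment lines above it in the shipped jvm.options.
grep -B3 'sharedArenaMaxPermits' /etc/elasticsearch/jvm.options
```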