SearchContextMissingException and out-of-memory crash

I have a single-node, 5-shard ES 0.19.0 setup with a 45MB corpus, 70GB RAM, and a 19GB heap size for ES. I'm using the mmapfs store. While running a batch job against ES last night (5000 queries at about 80 per second), we got a whole bunch of these errors…

[2012-03-08 06:40:29,789][DEBUG][action.search.type ] [Riot Grrl] [410836] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [410836]
at org.elasticsearch.search.SearchService.findContext(SearchService.java:451)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:424)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:344)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:149)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:136)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
[2012-03-08 06:40:29,806][DEBUG][action.search.type ] [Riot Grrl] [410815] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [410815]
at org.elasticsearch.search.SearchService.findContext(SearchService.java:451)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:424)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:344)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:149)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:136)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

…and then, finally, ES crashed with an OutOfMemory exception. Are these errors significant? I read in another thread that they don't affect results and are merely advisory. I was also surprised to see it run out of memory. Do I really need to allocate more RAM to Java, or is this a case of a fast workload outrunning the GC somehow? (I know little about Java's GC.)

A fatal error has been detected by the Java Runtime Environment:

java.lang.OutOfMemoryError: requested 32744 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:166), pid=27879, tid=139807308855040
Error: ChunkPool::allocate
JRE version: 6.0_20-b20
Java VM: OpenJDK 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 compressed oops)
Derivative: IcedTea6 1.9.13
Distribution: Ubuntu 10.04.1 LTS, package 6b20-1.9.13-0ubuntu1~10.04.1

If you would like to submit a bug report, please include
instructions how to reproduce the bug and visit:
https://bugs.launchpad.net/ubuntu/+source/openjdk-6/

--------------- T H R E A D ---------------

Current thread (0x00000000010dd800): VMThread [stack: 0x00007f276ceb3000,0x00007f276cfb4000] [id=27892]

Stack: [0x00007f276ceb3000,0x00007f276cfb4000], sp=0x00007f276cfb21d0, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x7234bc]
V [libjvm.so+0x7236db]
V [libjvm.so+0x32fc19]
V [libjvm.so+0x21ac94]
V [libjvm.so+0x21ad24]
V [libjvm.so+0x71e3c0]
V [libjvm.so+0x7195ce]
V [libjvm.so+0x260588]
V [libjvm.so+0x3bf2ee]
V [libjvm.so+0x3bf755]
V [libjvm.so+0x724748]
V [libjvm.so+0x72a68c]
V [libjvm.so+0x72931a]
V [libjvm.so+0x7298f6]
V [libjvm.so+0x729bf2]
V [libjvm.so+0x5e42e2]

VM_Operation (0x00007f1ea21c1ea0): GenCollectFull, mode: safepoint, requested by thread 0x00000000013af000

[snip]

VM Arguments:
jvm_args: -Xms19g -Xmx19g -Xss128k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -Des.config=/etc/elasticsearch/elasticsearch.yml -Des.path.home=/usr/share/elasticsearch -Des.path.logs=/var/log/elasticsearch -Des.path.data=/hork/elasticsearch-data -Des.path.work=/tmp/elasticsearch -Des.path.conf=/etc/elasticsearch
java_command: org.elasticsearch.bootstrap.ElasticSearch
Launcher Type: SUN_STANDARD

Environment Variables:
JAVA_HOME=/usr/lib/jvm/java-6-openjdk
PATH=/bin:/usr/bin:/sbin:/usr/sbin
USERNAME=root
LD_LIBRARY_PATH=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64
SHELL=/bin/bash

Signal Handlers:
SIGSEGV: [libjvm.so+0x7240c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGBUS: [libjvm.so+0x7240c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGFPE: [libjvm.so+0x5e08f0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGPIPE: [libjvm.so+0x5e08f0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGXFSZ: [libjvm.so+0x5e08f0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGILL: [libjvm.so+0x5e08f0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGUSR2: [libjvm.so+0x5e0000], sa_mask[0]=0x00000000, sa_flags=0x10000004
SIGHUP: [libjvm.so+0x5e2a70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGINT: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000
SIGTERM: [libjvm.so+0x5e2a70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004
SIGQUIT: [libjvm.so+0x5e2a70], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004

--------------- S Y S T E M ---------------

OS:Ubuntu 10.04 (lucid)
uname:Linux 2.6.32-312-ec2 #24-Ubuntu SMP Fri Jan 7 18:30:50 UTC 2011 x86_64
libc:glibc 2.11.1 NPTL 2.11.1
rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 65535, AS infinity
load average:1.36 1.81 1.69

/proc/meminfo:
MemTotal: 71700644 kB
MemFree: 35154760 kB
Buffers: 103768 kB
Cached: 11093932 kB
SwapCached: 0 kB
Active: 22829708 kB
Inactive: 10672756 kB
Active(anon): 22304940 kB
Inactive(anon): 144 kB
Active(file): 524768 kB
Inactive(file): 10672612 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 8044 kB
Writeback: 0 kB
AnonPages: 22305060 kB
Mapped: 7788820 kB
Shmem: 196 kB
Slab: 464488 kB
SReclaimable: 123580 kB
SUnreclaim: 340908 kB
KernelStack: 260240 kB
PageTables: 0 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 35850320 kB
Committed_AS: 25254012 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 6948 kB
VmallocChunk: 34359728868 kB
DirectMap4k: 71680000 kB
DirectMap2M: 0 kB

CPU:total 8 (4 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht

Memory: 4k page, physical 71700644k(35154760k free), swap 0k(0k free)

vm_info: OpenJDK 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_20-b20), built on Feb 17 2012 07:09:45 by "buildd" with gcc 4.4.3

time: Thu Mar 8 06:40:36 2012
elapsed time: 75745 seconds

Many thanks!
Erik

Heya,

That OOM is strange… it's not really because there isn't enough heap. I googled it a bit, and it seems to come either from a bug in the JVM (on an older 1.6.0_03 version) or from running out of native memory. I suggest two things: first, upgrade to the latest Java version (the latest 1.6.0 is update 31); second, try not using mmapfs (maybe the native buffers used there are causing the problem). I would drop mmapfs only after upgrading the JVM version.
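
For example, switching the store back to the default should be a one-line change in elasticsearch.yml (a sketch; the setting name is from the 0.19 index store module, so double-check it against your version):

index.store.type: niofs    # the default fs-based store on Linux; drop the mmapfs setting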

On Thursday, March 8, 2012 at 8:38 PM, Erik Rose wrote:

[snip]

Hi,

I think we are getting this same error. We recently updated our ES from 0.16.0 to 0.19.0, and now we get the following error while trying to index. It happens in our integration environment running Linux (CentOS release 5.2 (Final)) but not on my local dev box running Windows 7 64-bit. Our JVM version was already past 1.6.0_03, but we went ahead and updated to the latest (1.6.0_31-b04) as recommended, and we are still getting this error:

[2012-03-26 14:17:17,132][WARN ][index.merge.scheduler ] [webint21.vm.local] [geos][0] failed to merge
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:632)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.lucene.store.bytebuffer.PlainByteBufferAllocator.allocate(PlainByteBufferAllocator.java:55)
at org.apache.lucene.store.bytebuffer.CachingByteBufferAllocator.allocate(CachingByteBufferAllocator.java:51)
at org.elasticsearch.cache.memory.ByteBufferCache.allocate(ByteBufferCache.java:101)
at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.switchCurrentBuffer(ByteBufferIndexOutput.java:106)
at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.writeBytes(ByteBufferIndexOutput.java:93)
at org.elasticsearch.index.store.Store$StoreIndexOutput.flushBuffer(Store.java:580)
at org.apache.lucene.store.OpenBufferedIndexOutput.flushBuffer(OpenBufferedIndexOutput.java:101)
at org.apache.lucene.store.OpenBufferedIndexOutput.flush(OpenBufferedIndexOutput.java:88)
at org.elasticsearch.index.store.Store$StoreIndexOutput.flush(Store.java:593)
at org.apache.lucene.store.OpenBufferedIndexOutput.close(OpenBufferedIndexOutput.java:119)
at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:565)
at org.apache.lucene.index.FieldInfos.write(FieldInfos.java:322)
at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:228)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4295)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:90)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)

I read somewhere that we need to increase MaxDirectMemorySize. Is that
correct? Right now we are not setting it, and I am getting conflicting
information about the default value. Some claim the default is 64M and
is not enough, but the Oracle site says that if you don't set it, the
default is 0, which means it is unbounded
(http://docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm). Is
this what needs to be done?
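
One thing I can do is ask the JVM what it actually resolves on our box; my understanding (unverified) is that HotSpot will print the effective value with:

java -XX:+PrintFlagsFinal -version | grep MaxDirectMemorySize
# a printed value of 0 means "let the VM pick its own default"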

Thanks,
Hovanes

Our env:

OS: CentOS release 5.2 (Final)
Kernel: Linux webint21 2.6.18-92.1.22.el5 #1 SMP Tue Dec 16 11:57:43
EST 2008 x86_64 x86_64 x86_64 GNU/Linux
JVM: Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
Elasticsearch: 0.19.0

On Mar 8, 3:38 pm, Shay Banon kim...@gmail.com wrote:

[snip]

Just realized I did not provide the full JVM version info:

java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

On Mar 26, 3:56 pm, Hovanes hov...@gmail.com wrote:

[snip]

Are you using the in-memory option for the index store? By default, it uses
direct-buffer (off-heap) storage, so you might need to predefine how much
memory can be allocated using the -XX:MaxDirectMemorySize JVM parameter.
Another option is to simply use the default file-system-based storage; it's
usually fast enough. You can use mmapfs if you want memory-mapped files.
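
For example (just a sketch, assuming the stock scripts; the value is illustrative), cap direct memory in bin/elasticsearch.in.sh:

JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=2g"

or set the store explicitly in elasticsearch.yml (or simply remove the memory setting):

index.store.type: niofs    # the default fs store; use mmapfs for memory-mapped files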

On Tue, Mar 27, 2012 at 1:28 AM, Hovanes hovo73@gmail.com wrote:

[snip]

I don't think we are using the in-memory option, as we are running with
the out-of-the-box configuration right now.

We are running on CentOS, so judging from the store documentation it
should default to niofs storage, correct?

We are not setting -XX:MaxDirectMemorySize, so it should be unbounded,
correct?
http://docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm

I can try using mmapfs, but I haven't seen an example of how to do it,
and I am not sure how to configure it even after looking at the docs.
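
From the snippets above, my best guess at the mmapfs configuration (unverified) would be a single line in elasticsearch.yml:

index.store.type: mmapfs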

Thanks for your help,
Hovanes

On Mar 27, 10:39 am, Shay Banon kim...@gmail.com wrote:

Are you using the in memory option for the index store? By default, it uses
the direct buffers (off heap) storage, so you might need to predefine how
much memory can be allocated using the XX:MaxDirectMemorySize parameter to
the JVM. Another option is to simply use the default file system based
storage, its usually fast enough..., you can use mmapfs if you want to use
memory mapped files.

On Tue, Mar 27, 2012 at 1:28 AM, Hovanes hov...@gmail.com wrote:

Just realized i did not provide full JVM version info:

java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

On Mar 26, 3:56 pm, Hovanes hov...@gmail.com wrote:

Hi,

I think we are getting this same error. We have recently updated our
ES from 0.16.0 to0.19.0and now we are getting the following error
while trying to index. This happens in our integration environment
running Linux (CentOS release 5.2 (Final)) but not on my local dev box
running Windows7 64. Our JVM version was post 1.6.0_03, but we went
ahead and updated to the latest (1.6.0_31-b04) as recommended, but we
are are still getting this error:

[2012-03-26 14:17:17,132][WARN ][index.merge.scheduler ]
[webint21.vm.local] [geos][0] failed to merge
java.lang.OutOfMemoryError: Direct buffermemory
at java.nio.Bits.reserveMemory(Bits.java:632)
at java.nio.DirectByteBuffer.(DirectByteBuffer.java:97)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at

org.apache.lucene.store.bytebuffer.PlainByteBufferAllocator.allocate(PlainB
yteBufferAllocator.java:

  1. at
    

org.apache.lucene.store.bytebuffer.CachingByteBufferAllocator.allocate(Cach
ingByteBufferAllocator.java:

  1. at
    

org.elasticsearch.cache.memory.ByteBufferCache.allocate(ByteBufferCache.jav
a:

  1. at

org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.switchCurrentBuffe
r(ByteBufferIndexOutput.java:

  1. at

org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.writeBytes(ByteBuf
ferIndexOutput.java:

  1. at org.elasticsearch.index.store.Store
    

$StoreIndexOutput.flushBuffer(Store.java:580)
at

org.apache.lucene.store.OpenBufferedIndexOutput.flushBuffer(OpenBufferedInd
exOutput.java:

  1. at

org.apache.lucene.store.OpenBufferedIndexOutput.flush(OpenBufferedIndexOutp
ut.java:

  1. at org.elasticsearch.index.store.Store
    

$StoreIndexOutput.flush(Store.java:593)
at

org.apache.lucene.store.OpenBufferedIndexOutput.close(OpenBufferedIndexOutp
ut.java:

  1. at org.elasticsearch.index.store.Store
    $StoreIndexOutput.close(Store.java:565)
    at org.apache.lucene.index.FieldInfos.write(FieldInfos.java:
  2. at
    org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:
  3. at
    org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
    at
    org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4295)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:
  4. at

org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeSch
eduler.java:

  1. at

org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingCo
ncurrentMergeScheduler.java:

  1. at org.apache.lucene.index.ConcurrentMergeScheduler
    

$MergeThread.run(ConcurrentMergeScheduler.java:456)

I read somewhere that we need to increase MaxDirectMemorySize. Is that
correct? Right now we are not setting it and I am getting conflicting
information about what is the default value. Some claim the default is
64M and it is not enough, but Oracle site says that if you don't set
it, the default 0 which means it is is unbounded (http://
docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm). Is this what
needs to be done?

Thanks,
Hovanes

Our env:

OS: CentOS release 5.2 (Final)
Kernel: Linux webint21 2.6.18-92.1.22.el5 #1 SMP Tue Dec 16 11:57:43
EST 2008 x86_64 x86_64 x86_64 GNU/Linux
JVM: Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
Elasticsearch:0.19.0

On Mar 8, 3:38 pm, Shay Banon kim...@gmail.com wrote:

Heya,

That OOM is strange…, its not really because there isn't enough
heap. I googled it a bit, and it seems to come either from a bug in the JVM
(on an older 1.6.0_03 version), or by running out of nativememory. I
suggest two things here: The first, upgrade to the latest java version
(latest 1.6.0 is update 31), and the second is to try and not use mmapfs
(maybe the native buffers used there are causing the problem). The mmapfs I
would try only after upgrading the JVM version.

On Thursday, March 8, 2012 at 8:38 PM, Erik Rose wrote:

I have a single-node, 5-shard ES0.19.0setup with a 45MB corpus, 70GB
RAM, and a 19GB heap size for ES. I'm using the mmapfs store. While running
a batch job against ES last night (5000 queries at about 80 per second), we
got a whole bunch of these errors…

[2012-03-08 06:40:29,789][DEBUG][action.search.type ] [Riot Grrl]
[410836] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search
context found for id [410836]
at
org.elasticsearch.search.SearchService.findContext(SearchService.java:451)
at
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java
:424)
at
org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFet
ch(SearchServiceTransportAction.java:344)
at
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$As
yncAction.executeFetch(TransportSearchQueryThenFetchAction.java:149)
at
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$As
yncAction$2.run(TransportSearchQueryThenFetchAction.java:136)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:

at java.lang.Thread.run(Thread.java:636)
[2012-03-08 06:40:29,806][DEBUG][action.search.type ] [Riot Grrl]
[410815] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search
context found for id [410815]
at
org.elasticsearch.search.SearchService.findContext(SearchService.java:451)
at
org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java
:424)
at
org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFet
ch(SearchServiceTransportAction.java:344)
at
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$As
yncAction.executeFetch(TransportSearchQueryThenFetchAction.java:149)
at
org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$As
yncAction$2.run(TransportSearchQueryThenFetchAction.java:136)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:

at java.lang.Thread.run(Thread.java:636)

…and then, finally, ES crashed with an OutOfMemory exception. Are
these errors significant? I read in another thread that they don't affect
results and are merely advisory. I was also surprised to have it run out
ofmemory. Do I really need to allocate Java more RAM, or is this a case of
a fast workload outrunning the GC somehow. (I know little about Java's GC.)

A fatal error has been detected by the Java Runtime Environment:

java.lang.OutOfMemoryError: requested 32744 bytes for

ChunkPool::allocate. Out of swap space?

Internal Error (allocation.cpp:166), pid=27879, tid=139807308855040

Error: ChunkPool::allocate

JRE version: 6.0_20-b20

Java VM: OpenJDK 64-Bit Server VM (19.0-b09 mixed mode linux-amd64

compressed oops)

Derivative: IcedTea6 1.9.13

Distribution: Ubuntu 10.04.1 LTS, package

6b20-1.9.13-0ubuntu1~10.04.1

If you would like to submit a bug report, please include

instructions how to reproduce the bug and visit:

#Bugs : openjdk-6 package : Ubuntu

--------------- T H R E A D ---------------

Current thread (0x00000000010dd800): VMThread [stack:
0x00007f276ceb3000,0x00007f276cfb4000] [id=27892]

Stack: [0x00007f276ceb3000,0x00007f276cfb4000],
sp=0x00007f276cfb21d0, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code,
C=native code)
V [libjvm.so+0x7234bc]
V [libjvm.so+0x7236db]
V [libjvm.so+0x32fc19]
V [libjvm.so+0x21ac94]
V [libjvm.so+0x21ad24]
V [libjvm.so+0x71e3c0]
V [libjvm.so+0x7195ce]
V [libjvm.so+0x260588]
V [libjvm.so+0x3bf2ee]
V [libjvm.so+0x3bf755]
V [libjvm.so+0x724748]
V [libjvm.so+0x72a68c]
V [libjvm.so+0x72931a]
V [libjvm.so+0x7298f6]
V [libjvm.so+0x729bf2]
V [libjvm.so+0x5e42e2]

VM_Operation (0x00007f1ea21c1ea0): GenCollectFull, mode: safepoint,
requested by thread 0x00000000013af000

[snip]

VM Arguments:
jvm_args: -Xms19g -Xmx19g -Xss128k -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError

...

read more »

I just found out that our admin set the index store type to memory in the
integration environment without my knowledge, so I guess we are using the
memory store after all. Is that the reason? I am going to try the default,
but I am curious whether there is a way to get the memory store to work.
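
If we did want to keep the memory store, my guess (unverified, and the value is illustrative) is that we would have to pair it with an explicit direct-memory limit:

index.store.type: memory                            # in elasticsearch.yml (what our admin set)
JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=4g"   # in bin/elasticsearch.in.sh; sized to hold the index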

Thanks again,
Hovanes

On Mar 27, 11:13 am, Hovanes hov...@gmail.com wrote:

I don't think we are using inmemoryoption, as we are using out of
box configuration right now.

We are running it on CentOS so judging from the page below it should
default to niofs storage, correct?Elasticsearch Platform — Find real-time answers at scale | Elastic

We are not setting XX:MaxDirectMemorySize, so it should be unbounded,
correct?http://docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm

I can try using mmapfs, but I haven't seen an example of how to do it,
and I am not sure how to do it even after looking at this page:Elasticsearch Platform — Find real-time answers at scale | Elastic

Thanks for your help,
Hovanes

On Mar 27, 10:39 am, Shay Banon kim...@gmail.com wrote:

Are you using the inmemoryoption for the index store? By default, it uses
the direct buffers (off heap) storage, so you might need to predefine how
muchmemorycan be allocated using the XX:MaxDirectMemorySize parameter to
the JVM. Another option is to simply use the default file system based
storage, its usually fast enough..., you can use mmapfs if you want to use
memorymapped files.

On Tue, Mar 27, 2012 at 1:28 AM, Hovanes hov...@gmail.com wrote:

Just realized i did not provide full JVM version info:

java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

On Mar 26, 3:56 pm, Hovanes hov...@gmail.com wrote:

Hi,

I think we are getting this same error. We have recently updated our
ES from 0.16.0 to 0.19.0, and now we are getting the following error
while trying to index. This happens in our integration environment
running Linux (CentOS release 5.2 (Final)) but not on my local dev box
running Windows 7 64-bit. Our JVM version was post-1.6.0_03, but we went
ahead and updated to the latest (1.6.0_31-b04) as recommended, and we
are still getting this error:

[2012-03-26 14:17:17,132][WARN ][index.merge.scheduler ]
[webint21.vm.local] [geos][0] failed to merge
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:632)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.apache.lucene.store.bytebuffer.PlainByteBufferAllocator.allocate(PlainByteBufferAllocator.java:…)
at org.apache.lucene.store.bytebuffer.CachingByteBufferAllocator.allocate(CachingByteBufferAllocator.java:…)
at org.elasticsearch.cache.memory.ByteBufferCache.allocate(ByteBufferCache.java:…)
at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.switchCurrentBuffer(ByteBufferIndexOutput.java:…)
at org.apache.lucene.store.bytebuffer.ByteBufferIndexOutput.writeBytes(ByteBufferIndexOutput.java:…)
at org.elasticsearch.index.store.Store$StoreIndexOutput.flushBuffer(Store.java:580)
at org.apache.lucene.store.OpenBufferedIndexOutput.flushBuffer(OpenBufferedIndexOutput.java:…)
at org.apache.lucene.store.OpenBufferedIndexOutput.flush(OpenBufferedIndexOutput.java:…)
at org.elasticsearch.index.store.Store$StoreIndexOutput.flush(Store.java:593)
at org.apache.lucene.store.OpenBufferedIndexOutput.close(OpenBufferedIndexOutput.java:…)
at org.elasticsearch.index.store.Store$StoreIndexOutput.close(Store.java:565)
at org.apache.lucene.index.FieldInfos.write(FieldInfos.java:…)
at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:…)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:108)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4295)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:…)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:…)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:…)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)
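The failure mode in that trace is easy to reproduce in isolation. A minimal, self-contained Java sketch (the class name and the 64m cap are illustrative only) that exhausts the direct-buffer pool the same way the byte-buffer store does:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with: java -XX:MaxDirectMemorySize=64m DirectOom
public class DirectOom {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        while (true) {
            // Direct buffers live outside the Java heap; once the JVM's
            // direct-memory pool is exhausted, allocateDirect throws
            // java.lang.OutOfMemoryError: Direct buffer memory.
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
        }
    }
}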

I read somewhere that we need to increase MaxDirectMemorySize. Is that
correct? Right now we are not setting it, and I am getting conflicting
information about what the default value is. Some claim the default is
64M and is not enough, but the Oracle site says that if you don't set
it, the default is 0, which means it is unbounded (http://
docs.oracle.com/cd/E15289_01/doc.40/e15062/optionxx.htm). Is this what
needs to be done?
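One way to settle what the default actually is on a given JVM is to ask it directly. A quick probe using the non-portable sun.misc.VM API (present in Sun/Oracle and OpenJDK 6; a diagnostic sketch, not production code):

public class DirectLimit {
    public static void main(String[] args) {
        // This is the value java.nio.Bits.reserveMemory checks against;
        // it reflects -XX:MaxDirectMemorySize when the flag is given.
        System.out.println("max direct memory: "
                + sun.misc.VM.maxDirectMemory() + " bytes");
        System.out.println("max heap: "
                + Runtime.getRuntime().maxMemory() + " bytes");
    }
}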

Thanks,
Hovanes

Our env:

OS: CentOS release 5.2 (Final)
Kernel: Linux webint21 2.6.18-92.1.22.el5 #1 SMP Tue Dec 16 11:57:43
EST 2008 x86_64 x86_64 x86_64 GNU/Linux
JVM: Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
Elasticsearch: 0.19.0

On Mar 8, 3:38 pm, Shay Banon kim...@gmail.com wrote:

Heya,

That OOM is strange… it's not really because there isn't enough
heap. I googled it a bit, and it seems to come either from a bug in the JVM
(on an older 1.6.0_03 version) or from running out of native memory. I
suggest two things here: first, upgrade to the latest Java version
(the latest 1.6.0 is update 31); second, try not using mmapfs
(maybe the native buffers used there are causing the problem). I would
try dropping mmapfs only after upgrading the JVM version.

On Thursday, March 8, 2012 at 8:38 PM, Erik Rose wrote:

[snip]

Yeah, I think that's the reason. The direct memory that can be allocated by
the JVM is not unbounded and can be controlled by the setting I mentioned
in the previous mail. Once it's exhausted, you get the failure you saw.
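To watch the failure mode Shay describes in isolation, the DirectOom sketch from earlier in the thread can be run against an explicit cap:

java -XX:MaxDirectMemorySize=16m DirectOom

Raising the cap moves the failure point; switching off the memory store avoids the direct-buffer pressure altogether.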

On Tue, Mar 27, 2012 at 8:17 PM, Hovanes hovo73@gmail.com wrote:

[snip]