OOM due to merge exception

Hi all,

We are using an ES 1.3.7 cluster with two data-only nodes, two dedicated master nodes, and one combined data and master node.
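
For reference, the node roles are set in each node's elasticsearch.yml along these lines (a sketch of the layout, not our exact files):

# data-only nodes (x2)
node.master: false
node.data: true

# dedicated master nodes (x2)
node.master: true
node.data: false

# combined data and master node (x1)
node.master: true
node.data: true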

For the last two days, we have been getting frequent OOMs due to a merge exception.

It is quite confusing. Can anyone suggest a solution to overcome it?

Upgrading would help.

Otherwise, please provide the full error as well as more info on your cluster.
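
For example, the output of something like the following would help (a rough sketch; adjust host and port to your setup):

curl -s 'localhost:9200/_cat/nodes?v'
curl -s 'localhost:9200/_cluster/health?pretty'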

Thanks @warkolm.

We are using 1.5 GB as the heap size for each node.
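
For reference, on ES 1.x the heap is typically set through the ES_HEAP_SIZE environment variable before startup, e.g.:

export ES_HEAP_SIZE=1536m
bin/elasticsearch -d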

Then we start getting these errors:

[2017-01-12 07:19:27,551][WARN ][index.merge.scheduler ] [xxxxxx] [yyyyyy][0] failed to merge
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.packed.Packed64SingleBlock.<init>(Packed64SingleBlock.java:53)
at org.apache.lucene.util.packed.Packed64SingleBlock$Packed64SingleBlock2.<init>(Packed64SingleBlock.java:280)
at org.apache.lucene.util.packed.Packed64SingleBlock.create(Packed64SingleBlock.java:223)
at org.apache.lucene.util.packed.Packed64SingleBlock.create(Packed64SingleBlock.java:211)
at org.apache.lucene.util.packed.PackedInts.getReaderNoHeader(PackedInts.java:784)
at org.apache.lucene.codecs.lucene49.Lucene49NormsProducer.loadNorms(Lucene49NormsProducer.java:192)
at org.apache.lucene.codecs.lucene49.Lucene49NormsProducer.getNumeric(Lucene49NormsProducer.java:134)
at org.apache.lucene.index.SegmentCoreReaders.getNormValues(SegmentCoreReaders.java:176)
at org.apache.lucene.index.SegmentReader.getNormValues(SegmentReader.java:592)
at org.apache.lucene.index.SegmentMerger.mergeNorms(SegmentMerger.java:248)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:133)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4225)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3820)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:106)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[2017-01-12 07:19:28,577][WARN ][index.engine.internal ] [xxxxxx] [yyyyyy][0] failed engine [merge exception]
[2017-01-12 07:19:30,727][DEBUG][action.admin.cluster.node.stats] [xxxxxxx] failed to execute on node [p7ntM6dKRBSwC_KyTg8fFQ]
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:700)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:714)
at org.apache.lucene.index.IndexWriter.ramBytesUsed(IndexWriter.java:464)
at org.elasticsearch.index.engine.internal.InternalEngine.segmentsStats(InternalEngine.java:1167)
at org.elasticsearch.index.shard.service.InternalIndexShard.segmentStats(InternalIndexShard.java:540)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:166)
at org.elasticsearch.action.admin.indices.stats.ShardStats.<init>(ShardStats.java:49)
at org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:209)
at org.elasticsearch.node.service.NodeService.stats(NodeService.java:156)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:95)
at org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)
at org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:140)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-01-12 07:19:30,740][WARN ][cluster.action.shard ] [xxxxxx] [yyyyyy][0] sending failed shard for [yyyyyy][0], node[p7ntM6dKRBSwC_KyTg8fFQ], [P], s[STARTED], indexUUID [na], reason [engine failure, message [merge exception][MergeException[java.lang.OutOfMemoryError: Java heap space]; nested: OutOfMemoryError[Java heap space]; ]]

Can you please help us?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.