OutOfMemoryError Java Heap Space


(Animageofmine) #1

Any idea why this would happen?

Configuration: 15 GB for the Elasticsearch heap and 15 GB of RAM left to the OS for Lucene's file-system cache. Let me know if you need more information.
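For reference, a heap split like that is usually pinned in config/jvm.options so the JVM never resizes the heap (a sketch, assuming the standard Elasticsearch 5.x layout; the exact file path may differ per install):

```
# config/jvm.options (sketch): fix min and max heap to the same value
-Xms15g
-Xmx15g
# The other 15 GB of RAM is deliberately left unallocated so the OS
# page cache can hold Lucene segment files for fast reads.
```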

[2017-04-25T01:23:57,673][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [data4] fatal error in thread [elasticsearch[data4][search][T#19]], exiting
java.lang.OutOfMemoryError: Java heap space
	at org.apache.lucene.search.FieldComparator$LongComparator.<init>(FieldComparator.java:406) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource$1.<init>(LongValuesComparatorSource.java:64) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource.newComparator(LongValuesComparatorSource.java:64) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.apache.lucene.search.SortField.getComparator(SortField.java:361) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:142) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.apache.lucene.search.FieldValueHitQueue.<init>(FieldValueHitQueue.java:32) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.apache.lucene.search.FieldValueHitQueue$OneComparatorFieldValueHitQueue.<init>(FieldValueHitQueue.java:63) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.apache.lucene.search.FieldValueHitQueue.create(FieldValueHitQueue.java:166) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:492) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:211) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:106) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:259) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:373) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:324) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.action.search.SearchTransportService$9.messageReceived(SearchTransportService.java:321) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1385) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527) ~[elasticsearch-5.1.1.jar:5.1.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.1.1.jar:5.1.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_111]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]

(David Pilato) #2

Not enough memory.

What is your setup?

How many nodes, RAM, indices, shards?


(Animageofmine) #3

I figured out the problem. It was happening because of deep pagination: we were paging about 4M results deep, having raised the default index.max_result_window limit of 10k a while back.

Setup: 5 data nodes, 3 dedicated master nodes
Data nodes: 16 cores, 30 GB memory (15 for ES, 15 for Lucene).
Master: 4 cores, 15 GB memory
Number of shards = 1, replicas = 2
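The stack trace makes the failure mode concrete: TopFieldCollector.create builds a FieldValueHitQueue sized from + size, and the LongComparator it constructs (the frame where the OOM fires) eagerly allocates a long[] of that length per shard search. A rough back-of-the-envelope sketch of the difference between the 10k default and 4M-deep pages (the 8-bytes-per-slot figure matches a Java long; queue and object overhead are ignored, so treat this as a lower bound):

```python
# Sketch: eager allocation for one LongComparator over a hit queue of
# size from + size. Real usage multiplies this by shards per node and
# concurrent search threads (the trace shows thread [search][T#19]).

def comparator_bytes(num_hits: int, bytes_per_slot: int = 8) -> int:
    """Bytes allocated up front for one sort comparator's values array."""
    return num_hits * bytes_per_slot

deep = comparator_bytes(4_000_000)   # 4M-deep pagination
default = comparator_bytes(10_000)   # the 10k default window

print(deep)      # 32000000  (~32 MB per comparator, per shard search)
print(default)   # 80000     (~80 KB)
```

This is why the usual advice is to leave index.max_result_window alone and use the scroll API or search_after for deep result sets: both walk forward through results without ever building a from + size-sized priority queue.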


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.