/_cat/indices leads to OutOfMemory

Hi, I am running a single Elasticsearch 5.5.0 node on a server with 4 CPUs, 32 GB RAM (16 GB of it for the Elasticsearch heap) and a 2 TB SSD. It holds about 50 million documents in 2 indices; one index has 5 shards, the other only 1. For some time now the cluster status has been green, but as soon as I navigate to /_cat/indices, Elasticsearch reads huge amounts of data into the heap and finally crashes with a Java heap space OutOfMemoryError.
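The request itself is nothing special, just the plain cat API call (host and port here are the defaults, in my case it is the same call from the browser):

curl -XGET 'http://localhost:9200/_cat/indices?v'

The stack trace of the crash is as follows: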

[2018-06-15T13:55:35,349][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] fatal error in thread [elasticsearch[_L29Y8s][management][T#3]], exiting
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.FST.&lt;init&gt;(FST.java:387) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.util.fst.FST.&lt;init&gt;(FST.java:313) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.search.suggest.document.NRTSuggester.load(NRTSuggester.java:306) ~[lucene-suggest-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:30:09]
at org.apache.lucene.search.suggest.document.CompletionsTermsReader.suggester(CompletionsTermsReader.java:66) ~[lucene-suggest-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:30:09]
at org.apache.lucene.search.suggest.document.CompletionTerms.suggester(CompletionTerms.java:71) ~[lucene-suggest-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:30:09]
at org.elasticsearch.search.suggest.completion.CompletionFieldStats.completionStats(CompletionFieldStats.java:57) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.index.shard.IndexShard.completionStats(IndexShard.java:743) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.admin.indices.stats.CommonStats.&lt;init&gt;(CommonStats.java:207) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:163) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:433) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:412) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:399) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.5.0.jar:5.5.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.5.0.jar:5.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
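
If I read the trace correctly, the OOM happens while Elasticsearch computes completion suggester statistics for the shards (CompletionFieldStats / NRTSuggester loading FSTs), which /_cat/indices apparently triggers via the indices stats. I assume the same code path can also be hit directly through the stats API, roughly like this (host/port are the defaults, the wildcard just asks for all completion fields):

curl -XGET 'http://localhost:9200/_stats/completion?completion_fields=*&level=shards&pretty'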

I analyzed the heap dump and found org.apache.lucene.index.SegmentReader to be the main suspect.
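For completeness, the heap-related JVM settings are along these lines (only the 16 GB heap size is the actual value from above; the heap dump flags and path are just a sketch of how the dump ends up on disk):

# jvm.options (sketch)
-Xms16g
-Xmx16g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/lib/elasticsearch/heapdump.hprof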

Can anyone help me out with this?
