Elasticsearch 5.6.14 cluster goes down with an out-of-heap-space error

We have an index set up in 2.4 that we restored into a new 5.6 cluster (3 master nodes, 8 data nodes). The same query works on 2.4 but fails on 5.6 with the following error:
[2019-03-25T19:21:53,362][WARN ][o.e.m.j.JvmGcMonitorService] [gc][175] overhead, spent [580ms] collecting in the last [1s]
....
[2019-03-25T19:22:31,410][INFO ][o.e.m.j.JvmGcMonitorService] [gc][old][185][4] duration [8s], collections [1]/[8.2s], total [8s]/[29s], memory [29.4gb]->[29.9gb]/[29.9gb], all_pools {[young] [55.9mb]->[532.5mb]/[532.5mb]}{[survivor] [0b]->[58.2mb]/[66.5mb]}{[old] [29.3gb]->[29.3gb]/[29.3gb]}
[2019-03-25T19:22:31,411][WARN ][o.e.m.j.JvmGcMonitorService] [gc][185] overhead, spent [8s] collecting in the last [8.2s]
[2019-03-25T19:22:41,466][WARN ][o.e.m.j.JvmGcMonitorService] [gc][old][186][5] duration [10s], collections [1]/[10s], total [10s]/[39s], memory [29.9gb]->[29.9gb]/[29.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [58.2mb]->[65.5mb]/[66.5mb]}{[old] [29.3gb]->[29.3gb]/[29.3gb]}
[2019-03-25T19:26:35,632][WARN ][o.e.m.j.JvmGcMonitorService] [gc][186] overhead, spent [10s] collecting in the last [10s]
[2019-03-25T19:26:35,658][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:184)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:84)
....
[2019-03-25T19:26:35,663][INFO ][o.e.d.z.ZenDiscovery ] master_left [{NOXDcaUEROeM74Ow5EKZbg}{LuBjFnOPTJ6g5a2qsYbvUw}{...}{...:9300}{aws_availability_zone=us-west-2a}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2019-03-25T19:26:35,659][ERROR][o.e.t.n.Netty4Utils ] fatal error on the network layer
at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:184)
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:84)
....
....
[2019-03-25T19:26:35,679][WARN ][o.e.d.z.ZenDiscovery ] master left (reason = failed to ping, tried [3] times, each with maximum [30s] timeout), current nodes: nodes:
[2019-03-25T19:26:35,671][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] fatal error in thread [Thread-5], exiting
java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:481) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:490) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.&lt;init&gt;(HyperLogLogPlusPlus.java:171) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.readFrom(HyperLogLogPlusPlus.java:538) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.metrics.cardinality.InternalCardinality.&lt;init&gt;(InternalCardinality.java:51) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.SearchModule$$Lambda$858/1082640380.read(Unknown Source) ~[?:?]
at org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput.readNamedWriteable(NamedWriteableAwareStreamInput.java:46) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput.readNamedWriteable(NamedWriteableAwareStreamInput.java:39) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.lambda$readFrom$1(InternalAggregations.java:94) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations$$Lambda$1848/1229234476.read(Unknown Source) ~[?:?]
at org.elasticsearch.common.io.stream.StreamInput.readList(StreamInput.java:887) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.readFrom(InternalAggregations.java:94) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.readAggregations(InternalAggregations.java:84) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.InternalTerms$Bucket.&lt;init&gt;(InternalTerms.java:86) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket.&lt;init&gt;(StringTerms.java:51) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.StringTerms$$Lambda$1849/1697947181.read(Unknown Source) ~[?:?]
at org.elasticsearch.search.aggregations.bucket.terms.InternalMappedTerms.lambda$new$0(InternalMappedTerms.java:70) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.InternalMappedTerms$$Lambda$1850/88140207.read(Unknown Source) ~[?:?]
at org.elasticsearch.common.io.stream.StreamInput.readList(StreamInput.java:887) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.InternalMappedTerms.&lt;init&gt;(InternalMappedTerms.java:70) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.bucket.terms.StringTerms.&lt;init&gt;(StringTerms.java:111) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.SearchModule$$Lambda$887/2083951216.read(Unknown Source) ~[?:?]
at org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput.readNamedWriteable(NamedWriteableAwareStreamInput.java:46) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput.readNamedWriteable(NamedWriteableAwareStreamInput.java:39) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.lambda$readFrom$1(InternalAggregations.java:94) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations$$Lambda$1848/1229234476.read(Unknown Source) ~[?:?]
at org.elasticsearch.common.io.stream.StreamInput.readList(StreamInput.java:887) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.readFrom(InternalAggregations.java:94) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.aggregations.InternalAggregations.readAggregations(InternalAggregations.java:84) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.query.QuerySearchResult.readFromWithId(QuerySearchResult.java:270) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.search.query.QuerySearchResult.readFrom(QuerySearchResult.java:252) ~[elasticsearch-5.6.14.jar:5.6.14]
at org.elasticsearch.transport.TcpTransport.handleResponse(TcpTransport.java:1439) ~[elasticsearch-5.6.14.jar:5.6.14]

I also noticed that as soon as I fire the query, it starts doing a lot of GCs, which does not happen on the 2.4 version. What could be the reason for this?
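
For reference, the stack trace shows the node running out of heap while it reads back shard results (TcpTransport.handleResponse -> QuerySearchResult.readFrom) for a terms aggregation whose buckets each contain a cardinality sub-aggregation; the BigArrays.newByteArray frames at the top are the per-bucket HyperLogLog++ sketches being allocated. Our exact query is not included here, but a minimal request of that shape, with placeholder index and field names rather than our real ones, would look roughly like this:

POST /my-index/_search
{
  "size": 0,
  "aggs": {
    "by_user": {
      "terms": { "field": "user_id", "size": 10000 },
      "aggs": {
        "unique_sessions": {
          "cardinality": {
            "field": "session_id",
            "precision_threshold": 3000
          }
        }
      }
    }
  }
}

Each terms bucket gets its own HyperLogLog++ sketch whose memory use grows with precision_threshold, so a large terms size combined with a cardinality sub-aggregation means the coordinating node has to hold roughly one sketch per bucket per shard while it merges the responses.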

Hello Vamehta,

I've encountered similar Java heap space out-of-memory errors. The two areas below are usually where I first go to check settings as a possible fix.

  1. jvm.options -- the -Xms and -Xmx heap settings

  2. limits file - most installs I have done are on Debian, so I have found it at
    /etc/security/limits.conf (example entries for both are sketched below)
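
As an illustration of both (the values are placeholders, not a recommendation for this particular cluster), a typical setup pins the heap floor and ceiling to the same value in config/jvm.options and raises the per-user limits in /etc/security/limits.conf, assuming the service runs as the elasticsearch user:

# config/jvm.options -- heap sizing; keep -Xms and -Xmx equal and below ~32 GB
# so compressed object pointers stay enabled
-Xms30g
-Xmx30g

# /etc/security/limits.conf -- soft and hard limits for the elasticsearch user
elasticsearch  -  nofile   65536
elasticsearch  -  memlock  unlimited

Note that the GC log above already reports a ~30 GB heap (29.9gb -> 29.9gb / 29.9gb), so if your jvm.options already looks like this, the heap is close to its practical ceiling and the query itself is worth a look.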

Best Regards & Luck,
John
