ES memory issue


(Dazith Kj) #1

When querying data from dashboards I am getting the exception below in the ES log. Any idea how to resolve the issue?

ES version:

{
  "name": "Plunderer",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "9LjKXnnbRD-deirsIlx-zQ",
  "version": {
    "number": "2.4.6",
    "build_hash": "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp": "2017-07-18T12:17:44Z",
    "build_snapshot": false,
    "lucene_version": "5.5.4"
  },
  "tagline": "You Know, for Search"
}

[2018-05-10 17:59:50,689][WARN ][indices.breaker.request ] [request] New used memory 476217744 [454.1mb] for data of [<reused_arrays>] would be larger than configured breaker: 422523699 [402.9mb], breaking
[2018-05-10 17:59:50,695][DEBUG][action.search ] [Plunderer] [filebeat-2018.05.10][4], node[6HWLcpmWTDy2RpNhniV7QA], [P], v[2], s[STARTED], a[id=I79U_QLpTtCmsXeRLv3rNg]: Failed to execute [org.elasticsearch.action.search.SearchRequest@269957c2] lastShard [true]
RemoteTransportException[[Plunderer][192.168.200.42:9300][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [422523699/402.9mb]];
Caused by: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [422523699/402.9mb]];
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:409)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:113)
at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:372)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:385)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [422523699/402.9mb]]
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.circuitBreak(ChildMemoryCircuitBreaker.java:97)
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:147)
at org.elasticsearch.common.util.BigArrays.adjustBreaker(BigArrays.java:396)
at org.elasticsearch.common.util.BigArrays.resizeInPlace(BigArrays.java:426)
at org.elasticsearch.common.util.BigArrays.resize(BigArrays.java:472)
at org.elasticsearch.common.util.BigArrays.grow(BigArrays.java:489)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.ensureCapacity(HyperLogLogPlusPlus.java:197)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.collect(HyperLogLogPlusPlus.java:230)
at org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregator$DirectCollector.collect(CardinalityAggregator.java:203)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80)
at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator$2.collect(GlobalOrdinalsStringTermsAggregator.java:130)
at org.elasticsearch.search.aggregations.LeafBucketCollector.collect(LeafBucketCollector.java:88)
at org.apache.lucene.search.MultiCollector$MultiLeafCollector.collect(MultiCollector.java:174)
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:221)
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:821)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:384)
... 12 more


(Christian Dahlqvist) #2

Increase the amount of heap available to the node?
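For context, the `[request]` breaker that tripped in your log defaults to 40% of the JVM heap in ES 2.x, and the `422523699/402.9mb` limit is consistent with a heap of roughly 1 GB, so raising the heap raises that limit proportionally. On a 2.x package install the heap is set with the `ES_HEAP_SIZE` environment variable (in `/etc/default/elasticsearch` or `/etc/sysconfig/elasticsearch`). A sketch of both knobs; the `2g` value and localhost URL are just placeholders for your environment:

```shell
# Raise the JVM heap for an ES 2.x node. Rule of thumb: at most 50% of the
# machine's RAM, and stay below ~31g to keep compressed object pointers.
# The node must be restarted for this to take effect.
export ES_HEAP_SIZE=2g

# Alternatively (or additionally), the request breaker's share of the heap
# can be changed dynamically via the cluster settings API. Raising it only
# postpones the problem if the heap itself is too small:
#
#   curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
#     "transient": { "indices.breaker.request.limit": "60%" }
#   }'

echo "ES_HEAP_SIZE=$ES_HEAP_SIZE"
```

Since your stack trace shows the breaker tripping inside a cardinality aggregation (`HyperLogLogPlusPlus`) nested under a terms aggregation, reducing the dashboard's aggregation size or the cardinality `precision_threshold` can also cut the memory the query needs in the first place.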


(Dazith Kj) #3

I have found many methods. Can you help me find the best one? :slight_smile:


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.