[es-XXXX] [XXXXXXXX][1] failed to execute multi_get for [6]/[XXXXXXX] java.lang.OutOfMemoryError: Java heap space

[2015-12-30 13:50:13,088][INFO ][monitor.jvm ] [es-v2-23] [gc][young][1179360][5062] duration [960ms], collections [1]/[1.5s], total [960ms]/[1.9m], memory [3.4gb]->[2.7gb]/[5.8gb], all_pools {[young] [1gb]->[1.1mb]/[1.2gb]}{[survivor] [128mb]->[128mb]/[128mb]}{[old] [2.2gb]->[2.6gb]/[4.5gb]}
[2015-12-30 14:15:16,936][DEBUG][action.get ] [es-XXXX] [XXXXXXXX][1] failed to execute multi_get for [6]/[178901]
java.lang.OutOfMemoryError: Java heap space

Hi, can anyone suggest why the out of memory occurred?

Yes? You've run out of JVM heap, so either increase the capacity of your cluster, reduce the amount of data, scale back the complexity of the queries, or enable doc values if you haven't already. Without further details, exact suggestions are impossible.
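If doc values are not already enabled, they can be set per field in the mapping so that field data lives on disk rather than on the heap. A minimal sketch for an ES 1.x-style mapping; the index, type, and field names here are placeholders, and note that changing this for an existing field generally requires reindexing:

curl -XPUT 'http://localhost:9200/myindex/_mapping/mytype' -d '{
  "properties": {
    "myfield": {
      "type": "string",
      "index": "not_analyzed",
      "doc_values": true
    }
  }
}'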

Hi, could you please confirm how you arrived at the conclusion that we ran out of JVM heap?

We got this error on a 5-node cluster; sequential mget queries threw the error and a node crashed.

Could you also guide us on how we can avoid the OOM error?

Can you please provide some details about what you are doing when you get the out of memory error? How large are your multi_get requests? What is the size of your documents? How many concurrent requests are you sending? What is the specification of the cluster?
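For example, if a single multi_get carries tens of thousands of ids, all of those documents are pulled onto the heap at once; sending them in smaller batches is one way to bound that. A hypothetical sketch (index, type, and ids are placeholders):

curl -XGET 'http://localhost:9200/myindex/mytype/_mget' -d '{
  "ids": ["178901", "178902", "178903"]
}'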

In order to identify the amount of heap being used and visualise the load on the cluster I would recommend installing Marvel.
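Until Marvel is in place, heap usage can also be sampled directly from the nodes stats API (the host is a placeholder):

curl 'http://localhost:9200/_nodes/stats/jvm?pretty'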

Java heap size is 5.8gb
[jalbertemmanuel@yamuna21 ~]$ curl 'http://************/_cat/nodes?v&h=n,d,hp,hm,rm,fm,fe'
n        d     hp hm    rm      fm      fe
es-v1-XX 1.1tb 60 5.8gb 252.2gb 171.3mb 0
es-v1-XX 4.4tb 43 5.8gb 251.3gb 147.7mb 0
es-v1-XX 4.4tb 59 5.8gb 251.3gb 168.1mb 0
es-v1-XX 4.2tb 58 5.8gb 251.3gb 114.2mb 0
es-v1-XX 4.2tb 14 5.8gb 252.1gb 112mb   0

First, "no index mapper found" warnings occurred, and then the mget failed with the out of memory error:

[2015-12-30 12:58:22,923][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.firmId] returning default postings format
[2015-12-30 12:58:22,923][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.id] returning default postings format
[2015-12-30 12:58:22,923][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.lastUpdateDate] returning default postings format
[2015-12-30 12:58:22,924][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.lastUpdatedBy] returning default postings format
[2015-12-30 12:58:22,924][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.moduleId] returning default postings format
[2015-12-30 12:58:22,924][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.moduleName] returning default postings format
[2015-12-30 12:58:22,924][WARN ][index.codec ] [] no index mapper found for field: [agreementTypes.startDate] returning default postings format

[2015-12-30 13:50:13,088][INFO ][monitor.jvm ] [] [gc][young][1179360][5062] duration [960ms], collections [1]/[1.5s], total [960ms]/[1.9m], memory [3.4gb]->[2.7gb]/[5.8gb], all_pools {[young] [1gb]->[1.1mb]/[1.2gb]}{[survivor] [128mb]->[128mb]/[128mb]}{[old] [2.2gb]->[2.6gb]/[4.5gb]}
[2015-12-30 14:15:16,936][DEBUG][action.get ] [] [][1] failed to execute multi_get for [6]/[]
java.lang.OutOfMemoryError: Java heap space

[2015-12-30 14:17:42,628][INFO ][index.search.slowlog.query] [] [][3] took[6.9s], took_millis[6940], types[59], stats[], search_type[QUERY_AND_FETCH], total_shards[1], source[{"explain":false,"facets":{"f":{"terms":{"field":"assignedObjectRefId","size":10000000},"facet_filter":{"and":{"filters":[{"term":{"firmId"}},{"term":{"deleted":"0"}},{"or":{"filters":[{"term":{"isActive":"Y"}},{"missing":{"field":"isActive","null_value":true,"existence":true}}]}},{"term":{"assignedObjectId":91}},{"or":{"filters":[]}}]}}}}}], extra_source[],
[2015-12-30 14:19:06,808][INFO ][index.search.slowlog.query] [] [][1] took[6.9s], took_millis[6936], types[168], stats[], search_type[QUERY_AND_FETCH], total_shards[1], source[{"from":0,"size":15,"post_filter":{"and":{"filters":[{"terms":{"objectStatus":["0"]}},{"term":{"firmId":}},{"query":{"terms":{"_all":["customize","fields","in","free","version"]}}},{"or":{"filters":[]}}]}},"explain":false,"sort":[{"creationDate":{"order":"desc"}}]}], extra_source[],

That unfortunately does not really answer the questions I asked. It does however look like you have a query with a very large size value set, which can cause a large amount of heap to be used. This blog post provides additional details around things to avoid doing to your Elasticsearch cluster.
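For comparison, the slow log above shows a terms facet requesting up to ten million terms ("size":10000000). Bounding the size to what the application actually consumes keeps heap usage proportional to the result you need; a sketch of the same facet with a bounded size (the index name is a placeholder and 100 is an arbitrary example value):

curl -XGET 'http://localhost:9200/myindex/_search' -d '{
  "size": 0,
  "facets": {
    "f": {
      "terms": {
        "field": "assignedObjectRefId",
        "size": 100
      }
    }
  }
}'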

Please don't create multiple threads for the same thing, as it makes it really difficult to help. You already have "Multi_get throws error" open here, so let's use that.