Garbage collection not kicking in - Heap is growing to 98%

(SK) #1


I have a single-node ES 2.3.2 cluster with a 6 GB heap running on a Red Hat 6.x machine. I am indexing docs using the bulk API: 50 parallel bulk requests via curl, each approximately 12-15 MB. After indexing around 1.7 million docs, I hit a Java OOM error and see the following INFO statements in the logs:
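For scale, here is a back-of-the-envelope calculation (my own arithmetic, not from the post; using the 15 MB upper bound per request) of how much raw bulk payload those settings keep in flight at once against the 6 GB heap:

```python
# Rough sketch: in-flight bulk payload vs. heap size.
# Numbers taken from the post; 15 MB is the stated upper bound per request.
PARALLEL_REQUESTS = 50
REQUEST_SIZE_MB = 15
HEAP_MB = 6 * 1024

in_flight_mb = PARALLEL_REQUESTS * REQUEST_SIZE_MB   # raw payload held at once
fraction_of_heap = in_flight_mb / HEAP_MB

print(f"in-flight bulk payload: {in_flight_mb} MB "
      f"(~{fraction_of_heap:.0%} of heap)")          # 750 MB, ~12% of heap
```

That is only the raw request bytes; parsing, indexing buffers, and the translog multiply the per-request heap cost.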

java.lang.OutOfMemoryError: Java heap space
[2017-05-31 20:15:31,535][INFO ][monitor.jvm ] [pleej09_1] [gc][old][17986][7548] duration [5.2s], collections [1]/[5.5s], total [5.2s]/[11.1h], memory [5.7gb]->[5.7gb]/[5.8gb], all_pools {[young] [1.4gb]->[1.4gb]/[1.4gb]}{[survivor] [173.4mb]->[150.1mb]/[191.3mb]}{[old] [4.1gb]->[4.1gb]/[4.1gb]}
[2017-05-31 20:15:42,188][INFO ][monitor.jvm ] [pleej09_1] [gc][old][17988][7550] duration [5.3s], collections [1]/[5.7s], total [5.3s]/[11.1h], memory [5.7gb]->[5.7gb]/[5.8gb], all_pools {[young] [1.4gb]->[1.4gb]/[1.4gb]}{[survivor] [157.8mb]->[167.2mb]/[191.3mb]}{[old] [4.1gb]->[4.1gb]/[4.1gb]}

When I look into the stats, I see the below values in the GC section:

timestamp: 1496287486120,
uptime_in_millis: 56616425,
mem: {
  heap_used_in_bytes: 6140054600,
  heap_used_percent: 98,
  heap_committed_in_bytes: 6241845248,
  heap_max_in_bytes: 6241845248,
  non_heap_used_in_bytes: 125976480,
  non_heap_committed_in_bytes: 128987136,
  pools: {
    young: {
      used_in_bytes: 1605304320,
      max_in_bytes: 1605304320,
      peak_used_in_bytes: 1605304320,
      peak_max_in_bytes: 1605304320
    },
    survivor: {
      used_in_bytes: 99034656,
      max_in_bytes: 200605696,
      peak_used_in_bytes: 200605696,
      peak_max_in_bytes: 200605696
    },
    old: {
      used_in_bytes: 4435935232,
      max_in_bytes: 4435935232,
      peak_used_in_bytes: 4435935232,
      peak_max_in_bytes: 4435935232
    }
  }
}
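A quick cross-check of those pool numbers (my own arithmetic, not from the stats output) shows why GC cannot reclaim anything: both the young and old generations are at their configured maximum, so the collector has nothing dead to free and each full GC is futile:

```python
# Figures copied from the stats output above (bytes).
young_used, young_max = 1605304320, 1605304320
old_used, old_max = 4435935232, 4435935232
heap_used, heap_max = 6140054600, 6241845248

print(young_used == young_max)            # True: young gen completely full
print(old_used == old_max)                # True: old gen completely full
print(round(100 * heap_used / heap_max))  # 98, matching heap_used_percent
```

When the old generation sits at its max and every collection frees nothing (as the `[4.1gb]->[4.1gb]` log lines above show), the objects are still live, i.e. still referenced, so this is memory pressure from the workload rather than a GC tuning problem.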

My question is: why is GC not releasing memory, and why does memory grow continuously? Apart from indexing, no other activity is being carried out on ES.

I also use the mapper attachment plugin in my ES, as my docs contain attachments.

Kindly help.

(SK) #2


As I am using the mapper attachment plugin to parse my attachments, do you think attachment parsing using Tika might be the reason for so much heap consumption? I also see the below error:
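For context, the mapper attachments plugin expects each attachment base64-encoded inside the document source, so an attachment already costs roughly 4/3 of its raw size in the request body and translog before Tika even parses it. A minimal sketch of that inflation (hypothetical 3 MB file, not from the post):

```python
import base64

# Hypothetical 3 MB binary attachment (not an actual document from the post).
raw = b"\x00" * 3_000_000

# mapper-attachments requires the bytes base64-encoded in _source.
encoded = base64.b64encode(raw)

print(len(encoded))             # 4000000: base64 inflates size by 4/3
print(len(encoded) / len(raw))  # ~1.33
```

With 50 parallel 12-15 MB requests, the heap holds the encoded source, the decoded bytes, and Tika's extracted text simultaneously for many documents at once, which fits the pattern of an old generation full of live objects.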

[ep_fo_people_search_e92pbg01][[ep_fo_people_search_e92pbg01][1]] TranslogException[Failed to write operation [Create{id='', type='ep_fo_people_search_e92pbg01'}]]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.translog.Translog.add(
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(
at org.elasticsearch.index.engine.InternalEngine.innerCreate(
at org.elasticsearch.index.engine.InternalEngine.create(
at org.elasticsearch.index.shard.IndexShard.create(
at org.elasticsearch.index.engine.Engine$Create.execute(
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardUpdateOperation(
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(
at org.elasticsearch.transport.TransportService$4.doRun(
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: java.lang.OutOfMemoryError: Java heap space

(SK) #3

Can you please guide me / provide some pointers?

(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.