TranslogException [Failed to write operation]

Hi All,

I am seeing the error log below in ES. Can you please guide me on why one might get this exception?

I am using ES 2.3.3 with a 6G heap.

[ep_fo_people_search_e92pbg01][[ep_fo_people_search_e92pbg01][1]] TranslogException[Failed to write operation [Create{id='http://plef4001.us.com:8000/psc/e92pbg01x_newwin/EMPLOYEE/E92PBG01/c/FO_EMPLOYEE.FO_EMPLOYEE.GBL?Page=FO_EMP_PERS_DATA1&Action=U&EMPLID=XE009100000', type='ep_fo_people_search_e92pbg01'}]]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.translog.Translog.add(Translog.java:557)
at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:440)
at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:378)
at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:349)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:545)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:237)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:326)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardUpdateOperation(TransportShardBulkAction.java:389)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:191)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:639)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:271)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

OutOfMemoryError: Java heap space

Not enough HEAP available apparently.

Why? No idea without knowing anything about your cluster.
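
If you can share some basic node and index stats it would help; something along these lines shows per-node heap usage and index sizes (host and port below assume a local node on the default port):

curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
curl -s 'http://localhost:9200/_cat/indices?v'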

Hi,

I have a single-node ES 2.3.2 cluster with a 6G heap running on a Red Hat 6.x machine. I am indexing docs using the bulk API: 50 parallel bulk requests are sent to ES with curl, each approximately 12MB-15MB in size. After indexing around 1.7 million docs, I hit a Java OOM error and see the INFO statements below in the logs:

java.lang.OutOfMemoryError: Java heap space
[2017-05-31 20:15:31,535][INFO ][monitor.jvm ] [pleej09_1] [gc][old][17986][7548] duration [5.2s], collections [1]/[5.5s], total [5.2s]/[11.1h], memory [5.7gb]->[5.7gb]/[5.8gb], all_pools {[young] [1.4gb]->[1.4gb]/[1.4gb]}{[survivor] [173.4mb]->[150.1mb]/[191.3mb]}{[old] [4.1gb]->[4.1gb]/[4.1gb]}
[2017-05-31 20:15:42,188][INFO ][monitor.jvm ] [pleej09_1] [gc][old][17988][7550] duration [5.3s], collections [1]/[5.7s], total [5.3s]/[11.1h], memory [5.7gb]->[5.7gb]/[5.8gb], all_pools {[young] [1.4gb]->[1.4gb]/[1.4gb]}{[survivor] [157.8mb]->[167.2mb]/[191.3mb]}{[old] [4.1gb]->[4.1gb]/[4.1gb]}
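
For context, each of those bulk calls is roughly of this shape (the host and payload file name are placeholders); 50 such requests run concurrently, each file holding approximately 12MB-15MB of newline-delimited bulk actions:

# one of 50 concurrent bulk requests, each ~12MB-15MB of NDJSON actions
curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk_batch.json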

When I look at the stats, I see the following values in the JVM memory section:

timestamp: 1496287486120,
uptime_in_millis: 56616425,
mem: {
  heap_used_in_bytes: 6140054600,
  heap_used_percent: 98,
  heap_committed_in_bytes: 6241845248,
  heap_max_in_bytes: 6241845248,
  non_heap_used_in_bytes: 125976480,
  non_heap_committed_in_bytes: 128987136,
  pools: {
    young: {
      used_in_bytes: 1605304320,
      max_in_bytes: 1605304320,
      peak_used_in_bytes: 1605304320,
      peak_max_in_bytes: 1605304320
    },
    survivor: {
      used_in_bytes: 99034656,
      max_in_bytes: 200605696,
      peak_used_in_bytes: 200605696,
      peak_max_in_bytes: 200605696
    },
    old: {
      used_in_bytes: 4435935232,
      max_in_bytes: 4435935232,
      peak_used_in_bytes: 4435935232,
      peak_max_in_bytes: 4435935232
    }
  }
}
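
(For reference, the values above come from the jvm section of the node stats API; I fetch them with something like the following, where the host and port assume a local node.)

curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'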

My question is: why is GC not releasing memory, and why does heap usage keep growing continuously? Apart from indexing, no other activity is being carried out on ES.

I also use the mapper-attachments plugin, as my docs contain attachments.
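
The attachment fields are mapped with the plugin's attachment type, roughly like the sketch below; the index, type, and field names are only illustrative, and the attachment content itself is sent base64-encoded inside each document, which makes individual documents much larger than the source files:

# illustrative index/type/field names; the "file" field receives base64-encoded content
curl -s -XPUT 'http://localhost:9200/my_index' -d '{
  "mappings": {
    "my_type": {
      "properties": {
        "file": { "type": "attachment" }
      }
    }
  }
}'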

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.