I have a multi-core system with 32 GB of RAM, and 16 GB is assigned to ES_HEAP_SIZE. The problem is that when I try to upload a file of 1 GB, it fails with an out-of-memory (Java heap space) error. I am not sure what is going wrong. The JDK version is 1.8.
I am ready to provide more info if required, but I seriously don't know what's wrong.
[2017-11-04 23:37:37,964][WARN ][index.indexing.slowlog.index] [dev-09-data01][foxes_test][3] [FAILED toString()]
[2017-11-04 23:38:49,035][WARN ][action.index             ] [dev-09-data01] [foxes_test][3] failed to perform indices:data/write/index[r] on node {dev-09-data03}{3ArHcK5IRqq2GmDFqcFloA}{10.31.11.88}{10.31.11.88:9300}
RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space];
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding.decode(StringCoding.java:215)
    at java.lang.String.<init>(String.java:463)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:122)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:97)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:92)
    at org.elasticsearch.action.index.IndexRequest.toString(IndexRequest.java:744)
    at org.elasticsearch.action.support.replication.ReplicationRequest.getDescription(ReplicationRequest.java:260)
    at org.elasticsearch.action.support.replication.ReplicationRequest.createTask(ReplicationRequest.java:237)
    at org.elasticsearch.action.support.ChildTaskActionRequest.createTask(ChildTaskActionRequest.java:68)
    at org.elasticsearch.tasks.TaskManager.register(TaskManager.java:67)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:71)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:293)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2017-11-04 23:38:49,036][WARN ][cluster.action.shard     ] [dev-09-data01] [foxes_test][3] received shard failed for target shard
[[foxes_test][3], node[3ArHcK5IRqq2GmDFqcFloA], [R], v[3], s[STARTED], a[id=O5qlSKzTSkeZcoVcTb2_ng]], indexUUID [jytMiko5TL6MbhOlBncuOg], message [failed to perform indices:data/write/index on replica on node {dev-09-data03}{3ArHcK5IRqq2GmDFqcFloA}{10.31.11.88}{10.31.11.88:9300}], failure [RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space]; ]
RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space];
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding.decode(StringCoding.java:215)
    at java.lang.String.<init>(String.java:463)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:122)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:97)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:92)
    at org.elasticsearch.action.index.IndexRequest.toString(IndexRequest.java:744)
    at org.elasticsearch.action.support.replication.ReplicationRequest.getDescription(ReplicationRequest.java:260)
    at org.elasticsearch.action.support.replication.ReplicationRequest.createTask(ReplicationRequest.java:237)
    at org.elasticsearch.action.support.ChildTaskActionRequest.createTask(ChildTaskActionRequest.java:68)
    at org.elasticsearch.tasks.TaskManager.register(TaskManager.java:67)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:71)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:293)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
I do not see any error in ES logs.
What version are you on?
2.4.1
What OS?
Ubuntu 16.04.2 LTS
Why is the file so big?
This is web press data, hence the large file sizes.
Yeah, that is a good approach, but here is what I do not understand (correct me if I am wrong): if I have allocated a decent amount of memory to the heap (16 GB) and am trying to upload a 1 GB file, shouldn't it work fine without issues? Or is this a limitation of some sort?
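For context: a single 1 GB request does not cost only 1 GB of heap. The payload is buffered, parsed, and (as the stack trace shows) may even be re-serialized to JSON for logging, so one request can transiently consume several times its own size on both primary and replica nodes. The usual workaround is to split the source file into many smaller bulk batches. A minimal sketch, assuming a hypothetical newline-delimited bulk file (`docs.ndjson`) and the index name from the logs:

```shell
#!/bin/sh
# Sketch: split one large bulk file into small batches instead of
# sending a single huge request. File and document contents here are
# hypothetical stand-ins generated for the example.
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"

# Generate a sample bulk file: 25 action/source line pairs (50 lines).
i=1
while [ $i -le 25 ]; do
  printf '{"index":{"_index":"foxes_test","_type":"doc"}}\n' >> docs.ndjson
  printf '{"id":%d,"body":"press data"}\n' "$i" >> docs.ndjson
  i=$((i + 1))
done

# Split into batches of 10 lines (5 documents) each. In practice, size
# batches by bytes (roughly 5-15 MB each) rather than a fixed line count.
split -l 10 docs.ndjson batch_

# 50 lines at 10 lines per batch yields 5 batch files.
ls batch_* | wc -l

# Each batch would then be posted to the _bulk endpoint, e.g.:
# for f in batch_*; do
#   curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary "@$f"
# done
```

The commented `curl` loop at the end shows how the batches would be sent; it is left commented out because it needs a running cluster, and the URL is an assumption.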