Out_of_memory_error reason Java heap space

Hi Everyone,

I have a multi-core system with 32 GB of RAM, 16 GB of which is assigned to ES_HEAP_SIZE. The problem is that when I try to upload a 1 GB file, the request fails with an out-of-memory (Java heap space) error. I am not sure what is going wrong. The JDK version is 1.8.
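For reference, a minimal sketch of how the heap is set for ES 2.x (the exact file location is an assumption; on Debian/Ubuntu packages it is usually /etc/default/elasticsearch):

```shell
# ES 2.x sizes its heap from the ES_HEAP_SIZE environment variable.
# On Debian/Ubuntu package installs this line usually goes in
# /etc/default/elasticsearch; 16g matches the setup described above
# (half of the machine's 32 GB of RAM).
export ES_HEAP_SIZE=16g
echo "heap: $ES_HEAP_SIZE"
```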

I am happy to provide more info if required, but I seriously don't know what's wrong.

This is the command I am running:

curl -XPOST localhost:9200/foxes_test/all_test/1 -d @dc1a1254-28c4-4aea-8b69-df359bb64891 --header "Content-Type: application/json" 

{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[dev-09-data03][10.31.10.155:9300][indices:data/write/index]"}],"type":"out_of_memory_error","reason":"Java heap space"},"status":500}

My ES Config:

---
action.disable_delete_all_indices: true
bootstrap.mlockall: true
cluster.name: data
discovery.ec2.tag.aws:cloudformation:stack-name: dev-09
discovery.type: ec2
gateway.expected_nodes: 3
gateway.recover_after_nodes: 2
gateway.recover_after_time: 2m
http.max_content_length: 2147483647b
http.max_initial_line_length: 50kb
index.indexing.slowlog.threshold.index.warn: 10s
index.mapping.attachment.indexed_chars: -1
index.merge.scheduler.max_thread_count: 1
index.search.slowlog.threshold.query.warn: 10s
index.translog.flush_threshold_size: 500mb
indices.breaker.fielddata.limit: 60%
indices.breaker.request.limit: 40%
indices.breaker.total.limit: 70%
indices.cache.filter.size: 15%
indices.fielddata.cache.size: 20%
indices.memory.index_buffer_size: 20%
indices.memory.min_index_buffer_size: 96mb
indices.memory.min_shard_index_buffer_size: 12mb
network.host: 0.0.0.0
node.name: dev-09-data01
path.data: "/opt/elasticsearch/data/dev-09-data01"
path.logs: "/var/log/elasticsearch/dev-09-data01"
script.indexed: true
script.inline: true

Please help, I am really stuck on this.

--
Niraj

What is the actual error?
What version are you on?
What OS?
Why is the file so big?

Unless you have a good reason for doing so, I wouldn't change those.

Hi Mark,

Answering your questions:

What is the actual error?

Error in the ES log:

[2017-11-04 23:37:37,964][WARN ][index.indexing.slowlog.index] [dev-09-data01][foxes_test][3] [FAILED toString()]
[2017-11-04 23:38:49,035][WARN ][action.index             ] [dev-09-data01] [foxes_test][3] failed to perform indices:data/write/index[r] on node {dev-09-data03}{3ArHcK5IRqq2GmDFqcFloA}{10.31.11.88}{10.31.11.88:9300}
RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space];
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding.decode(StringCoding.java:215)
    at java.lang.String.<init>(String.java:463)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:122)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:97)
    at org.elasticsearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:92)
    at org.elasticsearch.action.index.IndexRequest.toString(IndexRequest.java:744)
    at org.elasticsearch.action.support.replication.ReplicationRequest.getDescription(ReplicationRequest.java:260)
    at org.elasticsearch.action.support.replication.ReplicationRequest.createTask(ReplicationRequest.java:237)
    at org.elasticsearch.action.support.ChildTaskActionRequest.createTask(ChildTaskActionRequest.java:68)
    at org.elasticsearch.tasks.TaskManager.register(TaskManager.java:67)
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:71)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:293)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2017-11-04 23:38:49,036][WARN ][cluster.action.shard     ] [dev-09-data01] [foxes_test][3] received shard failed for target shard [[foxes_test][3], node[3ArHcK5IRqq2GmDFqcFloA], [R], v[3], s[STARTED], a[id=O5qlSKzTSkeZcoVcTb2_ng]], indexUUID [jytMiko5TL6MbhOlBncuOg], message [failed to perform indices:data/write/index on replica on node {dev-09-data03}{3ArHcK5IRqq2GmDFqcFloA}{10.31.11.88}{10.31.11.88:9300}], failure [RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space]; ]
RemoteTransportException[[dev-09-data03][10.31.11.88:9300][indices:data/write/index[r]]]; nested: OutOfMemoryError[Java heap space];
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding.decode(StringCoding.java:215)
    (same stack trace as above, through java.lang.Thread.run)

Apart from the warnings above, I do not see any other errors in the ES logs.

What version are you on?

2.4.1

What OS?

Ubuntu 16.04.2 LTS

Why is the file so big?

This is web press data, hence the large size.

I would split the file up into smaller requests; I think the request size is what's causing the OOM.
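One way to do that, assuming the data can be reshaped into newline-delimited JSON with one record per line (an assumption about your file, not something the error tells us), is to split it into chunks and send each chunk as its own request:

```shell
# Sketch: split a large newline-delimited JSON file into fixed-size chunks
# and index each chunk separately, instead of one giant POST.
# The tiny sample below stands in for the real 1 GB file.
printf '{"id":%d}\n' 1 2 3 4 5 > records.ndjson

# 2 lines per chunk here to keep the demo small; a few thousand lines
# per chunk would be more realistic in practice.
split -l 2 records.ndjson chunk_

for f in chunk_*; do
  # In real use each chunk would go to Elasticsearch, e.g. via the _bulk
  # endpoint (which also needs an action line per record, omitted here):
  #   curl -XPOST localhost:9200/foxes_test/_bulk --data-binary @"$f"
  echo "$f: $(wc -l < "$f") line(s)"
done
```

This keeps each request body small enough that no single node has to hold (and re-serialize) the whole payload in heap at once.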

Yeah, that is a good approach, but here is what I do not understand (correct me if I am wrong): if I have allocated a decent amount of memory to the heap (16 GB) and am uploading a 1 GB file, shouldn't it work without issues? Or is this a limitation of some sort?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.