Elasticsearch [5.3] throwing exception when 10 concurrent threads try to index large documents

Hi,
We are using Elasticsearch 5.3 and the Elasticsearch REST client for Java of the same version. We have a cluster of 3 nodes.
Below is our config for the REST client:
maxConnsPerRoute=100
maxConns=200
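
For reference, here is a simplified sketch of how we build the client with those settings (host names are placeholders):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class EsClientFactory {
    // Simplified sketch of our client setup; host names are placeholders.
    public static RestClient build() {
        return RestClient.builder(
                new HttpHost("node1", 9200, "http"),
                new HttpHost("node2", 9200, "http"),
                new HttpHost("node3", 9200, "http"))
            .setHttpClientConfigCallback(b -> b
                .setMaxConnTotal(200)       // maxConns
                .setMaxConnPerRoute(100))   // maxConnsPerRoute
            .build();
    }
}
```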

We have large documents of around 23 MB (in JSON) to persist in one of our indices. While persisting such documents concurrently, we get some strange exceptions:

HTTP/1.1 400 Bad Request
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"i_o_exception","reason":"Unexpected character ('e' (code 101)): was expecting a colon to separate field name and value\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@5286e42d; line: 1, column: 4849631]"}},"status":400}

This exception occurs when 8-10 concurrent threads try to index such documents in ES, but with 5 concurrent threads it works fine.
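
For reference, each worker thread roughly does the following (a simplified sketch; the index/type names and document ID are placeholders, and each thread builds its own request entity from its own JSON string):

```java
import java.util.Collections;

import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class IndexTask implements Runnable {
    private final RestClient restClient; // shared, thread-safe client
    private final String docId;          // placeholder document ID
    private final String json;           // the ~23 MB JSON document

    public IndexTask(RestClient restClient, String docId, String json) {
        this.restClient = restClient;
        this.docId = docId;
        this.json = json;
    }

    @Override
    public void run() {
        try {
            // Each thread builds its own entity; nothing mutable is shared.
            NStringEntity entity = new NStringEntity(json, ContentType.APPLICATION_JSON);
            Response response = restClient.performRequest(
                "PUT", "/my_index/my_type/" + docId,
                Collections.<String, String>emptyMap(), entity);
            System.out.println(response.getStatusLine());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```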

The stranger part is that even though we get this exception, the document is still indexed/created in ES. So we want to understand whether we need some size-based throttling when indexing large documents, and what the best practices are for indexing such large documents in Elasticsearch.
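
For example, would a simple client-side throttle like the sketch below be a reasonable approach, or is there a better mechanism? (The limit of 5 is just the concurrency that happened to work for us; the class and method names are hypothetical.)

```java
import java.util.concurrent.Semaphore;

public class ThrottledIndexer {
    // Hypothetical throttle: cap in-flight large-document requests at 5,
    // since 5 concurrent threads worked for us while 8-10 did not.
    private final Semaphore permits = new Semaphore(5);

    public void index(Runnable indexTask) throws InterruptedException {
        permits.acquire();
        try {
            indexTask.run();
        } finally {
            permits.release();
        }
    }
}
```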

Please let me know if you need any other information from our side.

This is urgent and has been haunting us for the past few days.

Thanks in advance,
Neeraj Singhal
