Hi,
I am trying to do a remote reindex of a document which is >100 MB in size, but it is failing with the exception below:
error={type=illegal_argument_exception, reason=Remote responded with a chunk that was too large. Use a smaller batch size., caused_by={type=content_too_long_exception, reason=entity content is too long [185463385] for the configured buffer limit [104857600]}}}
I even tried increasing http.max_content_length to 500MB, but it is still failing with the same exception. Is there any way to increase this limit, or is there any workaround to reindex such documents?
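For reference, the reindex call I am running is along these lines (the host and index names are placeholders, not the real values):

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "source-index"
  },
  "dest": {
    "index": "dest-index"
  }
}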
Let's start by verifying that it is applied correctly: can you post the relevant output of GET /_cluster/settings?include_defaults=true, please? This isn't a dynamic setting and must be set in the YML configuration on all hosts.
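As a minimal sketch of what that looks like (the value is only illustrative), the setting goes into elasticsearch.yml on every node and only takes effect after a node restart:

# elasticsearch.yml on every node; requires a restart since it is not dynamic
http.max_content_length: 500mb

You can then check what the cluster actually reports with:

GET /_cluster/settings?include_defaults=true&flat_settings=true

and look for http.max_content_length in the output.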
BUT the bigger question is why you need this. A single document of more than 100 MB sounds like something is very wrong. I'd strongly suggest restructuring your data instead of changing the setting, which is generally a good protection for your cluster.
Hi xeraa,
I have attached the response of the settings call here.
I understand that documents of 100 MB are not advisable. But when we have documents of around 10-20 MB, we are forced to use a batch size of less than 5 during bulk reindexing. With such a small batch size, reindexing a huge index takes much longer. So we wanted to increase this 100 MB limit to a reasonably larger number. Please advise.
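For illustration, this is roughly how we cap the batch size in the reindex body today (only the source part is shown, and the values are examples):

"source": {
  "remote": { "host": "http://old-cluster:9200" },
  "index": "source-index",
  "size": 5
}

The size field in source is the per-batch document count, so with documents of around 20 MB each, anything above 5 per batch already approaches the 100 MB buffer limit from the error above.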