How to update http.max_content_length in an Elastic Cloud cluster


(Satendra Kumar) #1

I am using an Elastic Cloud cluster and am getting this error:

Job aborted due to stage failure: Task 1782 in stage 1.0 failed 4 times, most recent failure: Lost task 1782.3 in stage 1.0 (TID 2191, 10.193.89.157, executor 6): org.apache.spark.util.TaskCompletionListenerException: [PUT] on [data_the_284_ingest/my_data/_bulk] failed; server[...] returned [413|Request Entity Too Large:]
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:153)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:131)
    at org.apache.spark.scheduler.Task.run(Task.scala:128)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Is there any way to update the http.max_content_length setting?


(Christian Dahlqvist) #2

Changing that would probably require you to get in contact with Elastic Cloud support. I would, however, start by looking into why you are sending such large requests. There is a reason the limit is in place: handling very large requests can consume a lot of resources and cause instability.
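
If the requests are large because of the connector's batching rather than huge individual documents, the elasticsearch-hadoop connector lets you shrink the bulk requests each task sends via its es.batch.size.bytes and es.batch.size.entries settings. A minimal sketch of a DataFrame write in Scala, assuming elasticsearch-spark is on the classpath; the input path, app name, and the exact batch values are just illustrations, and the index name is taken from your error message:

    import org.apache.spark.sql.SparkSession
    import org.elasticsearch.spark.sql._

    val spark = SparkSession.builder()
      .appName("bulk-ingest")          // hypothetical app name
      .getOrCreate()

    // Hypothetical input; any DataFrame works the same way.
    val df = spark.read.json("/path/to/input")

    // Smaller bulk batches keep each [PUT] .../_bulk request well
    // under the server-side http.max_content_length limit.
    df.saveToEs(
      "data_the_284_ingest/my_data",
      Map(
        "es.batch.size.bytes"   -> "1mb",  // flush once ~1mb is buffered
        "es.batch.size.entries" -> "500"   // or once 500 docs are buffered
      )
    )

Note that these limits apply per Spark task, not per job: every task keeps its own bulk buffer and flushes it independently when either threshold is reached.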


(James) #3

I don't think it can be updated on ECE at the moment.
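
For reference, http.max_content_length is a static node-level setting, so even on a self-managed cluster it cannot be changed through the cluster settings API; it has to be set in elasticsearch.yml on every node and takes effect only after a restart. Something like the following, where the 200mb value is purely an example:

    # elasticsearch.yml (static setting, per node, requires a restart)
    http.max_content_length: 200mb

The default is 100mb, so a 413 like the one above means individual bulk requests are exceeding even that.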


(system) closed #4

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.