I am using an Elastic Cloud cluster and my Spark job fails with this error:
Job aborted due to stage failure: Task 1782 in stage 1.0 failed 4 times, most recent failure: Lost task 1782.3 in stage 1.0 (TID 2191, 10.193.89.157, executor 6): org.apache.spark.util.TaskCompletionListenerException: [PUT] on [data_the_284_ingest/my_data/_bulk] failed; server[...] returned [413|Request Entity Too Large:]
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:153)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:131)
at org.apache.spark.scheduler.Task.run(Task.scala:128)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Is there any way to update the http.max_content_length setting to avoid the 413 response?
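For context, here is a minimal sketch of the kind of write that produces these _bulk PUTs, assuming the job uses the elasticsearch-spark (elasticsearch-hadoop) connector. The index/type name is copied from the error above; the endpoint, input path, and object name are placeholders, and es.batch.size.bytes / es.batch.size.entries are shown at their defaults since they are the connector-side options that cap the size of each bulk request:

import org.apache.spark.sql.SparkSession

object IngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-to-es")
      // Placeholder Elastic Cloud endpoint and port; real values come from the deployment.
      .config("es.nodes", "https://my-cluster.es.example.io")
      .config("es.port", "9243")
      .config("es.nodes.wan.only", "true")
      .getOrCreate()

    // Placeholder input; the real job reads whatever source produces the failing tasks.
    val df = spark.read.parquet("/path/to/input")

    df.write
      .format("org.elasticsearch.spark.sql")
      // es.batch.size.bytes / es.batch.size.entries limit the size of each bulk
      // request a task sends; lowering them is the client-side counterpart to
      // raising http.max_content_length on the cluster.
      .option("es.batch.size.bytes", "1mb")
      .option("es.batch.size.entries", "1000")
      .mode("append")
      // Index/type taken from the error message above.
      .save("data_the_284_ingest/my_data")

    spark.stop()
  }
}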