Using elasticsearch-spark to write data to elasticsearch throws exception


(hanbj) #1

Hi, all.
Using elasticsearch-spark to write data to Elasticsearch throws an exception. There are 8 nodes in my cluster. Monitoring shows that cluster pressure is low and the machine load is fine.
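For context, the write is configured roughly like this (a minimal sketch, not my exact job; the index name, node list, and batch setting are placeholders — note that `es.port` is set to 9100 to match the addresses in the error below, whereas the connector's default is 9200):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._ // adds saveToEs to RDDs

// Sketch only: node list and index name are placeholders.
val conf = new SparkConf()
  .setAppName("es-write-example")
  .set("es.nodes", "10.247.24.51,10.247.24.52") // comma-separated ES nodes
  .set("es.port", "9100")                       // connector default is 9200
  .set("es.batch.size.entries", "1000")         // smaller bulks reduce pressure on ES

val sc = new SparkContext(conf)
val docs = Seq(Map("id" -> 1, "msg" -> "hello"),
               Map("id" -> 2, "msg" -> "world"))
sc.makeRDD(docs).saveToEs("my-index/doc")       // "index/type" path is an assumption
```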

The exception information is as follows:

Job aborted due to stage failure: Task 7 in stage 0.0 failed 4 times, most recent failure: Lost task 7.3 in stage 0.0 (TID 32, slave777-prd3.sss.com, executor 10): org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[10.247.24.51:9100, 10.101.10.131:9100, 10.247.24.52:9100, 10.101.10.130:9100, 10.247.24.55:9100, 10.247.24.54:9100, 10.101.10.132:9100, 10.247.24.53:9100]]
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:


(system) #2

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.