Error in Spark Streaming job saving to Elasticsearch

Hello,
I am getting an error in a pipeline that goes Kafka ---> Spark Streaming job ---> save to Elasticsearch.
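For context, here is a minimal sketch of the write step (the part that appears in the stack trace). The Kafka source is replaced with a queue stream so the sketch is self-contained; the index name and document are illustrative, while the nodes and port match my cluster:

import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

import scala.Tuple2;

public class KafkaToEsSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setAppName("kafka-to-es")
                .setMaster("local[2]")
                .set("es.nodes", "pcyyykk5,pcyyykk6,pcyyykk7") // nodes from the log below
                .set("es.port", "29213");
        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(30));

        // Stand-in for KafkaUtils.createDirectStream: one micro-batch with one document.
        Queue<JavaRDD<Map<String, Object>>> queue = new LinkedList<>();
        Map<String, Object> doc = new HashMap<>();
        doc.put("message", "hello");
        queue.add(ssc.sparkContext().parallelize(Arrays.asList(doc)));
        JavaDStream<Map<String, Object>> docs = ssc.queueStream(queue);

        // The write step from the stack trace: save each micro-batch with
        // per-document metadata (here only a document id).
        docs.foreachRDD(rdd -> JavaEsSpark.saveToEsWithMeta(
                rdd.mapToPair(d -> new Tuple2<Object, Object>("id-1", d)),
                "events/event")); // "index/type" resource; illustrative

        ssc.start();
        ssc.awaitTermination();
    }
}

The driver log shows repeated connection timeouts, and then the job aborts: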

18/01/17 22:11:44 INFO HttpMethodDirector: Retrying request
18/01/17 22:13:51 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out
18/01/17 22:13:51 INFO HttpMethodDirector: Retrying request
18/01/17 22:15:58 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out
18/01/17 22:15:58 INFO HttpMethodDirector: Retrying request
18/01/17 22:18:05 ERROR NetworkClient: Node [pcyyykk7:29213] failed (Connection timed out); selected next node [pcyyykk6:29213]
18/01/17 22:20:12 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out
18/01/17 22:20:12 INFO HttpMethodDirector: Retrying request
18/01/17 22:22:20 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out
18/01/17 22:22:20 INFO HttpMethodDirector: Retrying request
18/01/17 22:24:27 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Connection timed out
18/01/17 22:24:27 INFO HttpMethodDirector: Retrying request
18/01/17 22:26:34 ERROR NetworkClient: Node [pcyyykk6:29213] failed (Connection timed out); no other nodes left - aborting...
18/01/17 22:26:34 INFO JobScheduler: Finished job streaming job 1516039320000 ms.0 from job set of time 1516039320000 ms
18/01/17 22:26:34 INFO JobScheduler: Total delay: 185074.698 s for time 1516039320000 ms (execution: 1526.998 s)
18/01/17 22:26:34 INFO JobScheduler: Starting job streaming job 1516039350000 ms.0 from job set of time 1516039350000 ms
18/01/17 22:26:34 ERROR JobScheduler: Error running job streaming job 1516039320000 ms.0
org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[pcyyykk5:29213, pcyyykk7:29213, pcyyykk6:29213]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:150)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:444)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:452)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:519)
at org.elasticsearch.hadoop.rest.RestRepository.indexExists(RestRepository.java:388)
at org.elasticsearch.hadoop.rest.InitializationUtils.doCheckIndexExistence(InitializationUtils.java:278)
at org.elasticsearch.hadoop.rest.InitializationUtils.checkIndexExistence(InitializationUtils.java:269)
at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:100)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEsWithMeta(EsSpark.scala:85)
at org.elasticsearch.spark.rdd.EsSpark$.saveToEsWithMeta(EsSpark.scala:80)
at org.elasticsearch.spark.rdd.api.java.JavaEsSpark$.saveToEsWithMeta(JavaEsSpark.scala:58)
at org.elasticsearch.spark.rdd.api.java.JavaEsSpark.saveToEsWithMeta(JavaEsSpark.scala)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$3.apply(JavaDStreamLike.scala:335)
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$3.apply(JavaDStreamLike.scala:335)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:227)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:227)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:227)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:226)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

The issue is detailed in the log lines for each node: the connector is not getting a response from the server within a reasonable time, and the connection attempts time out. That happens for every node in your cluster (pcyyykk5, pcyyykk6, pcyyykk7). Note that the stack trace shows the failure in the initial index-existence check (RestRepository.indexExists), before any documents are written, so the connector cannot even open a connection to Elasticsearch. Have you checked your network for anything that might keep the connector from reaching those nodes, or checked the nodes themselves to see if they are overloaded?
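If the network itself checks out, you can also adjust how long the connector waits and how many times it retries each node before giving up. These are standard elasticsearch-hadoop connection settings; the values below are only examples, and the node list and port are taken from your log:

import org.apache.spark.SparkConf;

public class EsTimeoutSettings {
    // Standard elasticsearch-hadoop connection settings; values are examples.
    static SparkConf withEsSettings(SparkConf conf) {
        return conf
                .set("es.nodes", "pcyyykk5,pcyyykk6,pcyyykk7")
                .set("es.port", "29213")
                .set("es.http.timeout", "30s")     // per-request timeout (default 1m)
                .set("es.http.retries", "2")       // retries per node (default 3)
                .set("es.nodes.wan.only", "true"); // talk only to the declared nodes,
                                                   // useful when the node addresses
                                                   // discovered from the cluster are
                                                   // not reachable from Spark
    }
}

Shorter timeouts and fewer retries won't fix the connectivity problem, but they will make the job fail fast instead of accumulating the multi-minute delays per attempt visible in your log.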
