Spark to AWS ElasticSearch Service

I am running Spark on my local machine, and Elasticsearch is up and running in the AWS Elasticsearch Service. I am trying to follow the documentation specified here:

The version of elasticsearch-spark that I am using is:


This is what my SparkConf looks like:

SparkConf conf = new SparkConf().setMaster("local[*]").setAppName(properties.getProperty(""))
                .set("es.nodes", "search-**********")
                .set("es.http.timeout", "5m")
                .set("es.nodes.wan.only", "true");
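For reference, a fuller version of this configuration might look like the sketch below. The endpoint is a placeholder (the real one is masked above), and the `es.port`/`es.net.ssl` settings are an assumption based on the fact that the AWS Elasticsearch Service only accepts HTTPS traffic on port 443:

```java
import org.apache.spark.SparkConf;

public class SparkEsConfig {

    public static SparkConf build() {
        // Hypothetical AWS endpoint; the real one is masked in the question.
        // AWS Elasticsearch Service listens on HTTPS/443, so es.port and
        // es.net.ssl usually need to be set explicitly alongside wan.only.
        return new SparkConf()
                .setMaster("local[*]")
                .setAppName("spark-to-es")
                .set("es.nodes", "search-example.us-east-1.es.amazonaws.com")
                .set("es.port", "443")
                .set("es.net.ssl", "true")
                .set("es.nodes.wan.only", "true")
                .set("es.http.timeout", "5m");
    }
}
```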

# Call the method to send logs to ES; assume stringResults is a JavaDStream<Map<String, Object>> object

This is how I am trying to store the data in Elasticsearch:

import static;

public class ElasticSearchManager {

    public static void sendToEs(JavaDStream<Map<String, Object>> javaDStream) {
        ZonedDateTime dateTime =;
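The class above is truncated as posted. For reference, a minimal sketch of writing a `JavaDStream` of maps to Elasticsearch with ES-Hadoop's Java streaming API looks like this; the `logs/doc` resource name is an assumption, not something from the question:

```java
import java.time.ZonedDateTime;
import java.util.Map;

import org.apache.spark.streaming.api.java.JavaDStream;
import org.elasticsearch.spark.streaming.api.java.JavaEsSparkStreaming;

public class ElasticSearchManager {

    public static void sendToEs(JavaDStream<Map<String, Object>> javaDStream) {
        // Timestamp taken when the stream is handed off; purely illustrative.
        ZonedDateTime dateTime = ZonedDateTime.now();

        // Write each micro-batch to the (hypothetical) "logs/doc" resource.
        // This call triggers the version check that fails in the stack trace.
        JavaEsSparkStreaming.saveToEs(javaDStream, "logs/doc");
    }
}
```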


This is the error I get:

org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
    at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:104)
    at org.elasticsearch.spark.streaming.EsSparkStreaming$anonfun$doSaveToEs$1.apply(EsSparkStreaming.scala:71)
    at org.elasticsearch.spark.streaming.EsSparkStreaming$anonfun$doSaveToEs$1.apply(EsSparkStreaming.scala:71)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2(DStream.scala:628)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2$adapted(DStream.scala:628)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$2(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$1(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.$anonfun$run$1(JobScheduler.scala:257)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at org.apache.spark.streaming.scheduler.JobScheduler$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
Caused by: [GET] on [] failed; server[search-************] returned [400|Bad Request:]
    ... 19 more

I tried to debug what the issue is, and this is what I found in the package at line 745:

Map<String, Object> result = get("", null);

Not sure why they would set the URI in the get method to an empty string. I am stuck at this point and don't have a good path forward. Any help would be appreciated.

That method performs the "main" action on the cluster, which should return the cluster's name, UUID, and version number. ES-Hadoop performs this action first to determine which APIs and features are available for it to use. Most likely this is an issue with how AWS Elasticsearch accepts traffic to the cluster.
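You can reproduce that first check outside Spark with nothing but the JDK's `HttpClient` (Java 11+); the endpoint below is a placeholder. ES-Hadoop's empty-string URI resolves to a plain GET on the cluster root (`/`), so if this request returns a 400 from the AWS domain, the problem sits in front of ES-Hadoop rather than in your Spark code:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class MainActionCheck {

    // Build the same request ES-Hadoop issues first: a GET on the cluster
    // root ("/"), which returns the cluster name, UUID, and version number.
    static HttpRequest buildInfoRequest(String endpoint) {
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "/"))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Hypothetical AWS endpoint; substitute your real domain endpoint.
        HttpRequest request = buildInfoRequest(
                "https://search-example.us-east-1.es.amazonaws.com");
        System.out.println(request.method() + " " + request.uri());
        // To actually send it (requires network access to the domain):
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```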