Cannot detect ES version - using Spark on AWS Glue

I am using an AWS Glue job running on a Spark cluster and trying to connect to an Elasticsearch node.

import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.{GlueArgParser, Job}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.elasticsearch.spark.sql._      // brings saveToEs into scope
import scala.collection.JavaConverters._  // for args.asJava

val esHost = "10.196.67.240"
val esPort = "9200"
val indexName = "xxx"

// ES-Hadoop connection settings, applied before the SparkContext is created
val sparkConfig = new SparkConf()
  .set("es.nodes", esHost)
  .set("es.port", esPort)
  .set("es.nodes.wan.only", "true")

val sparkContext: SparkContext = new SparkContext(sparkConfig)
val glueContext: GlueContext = new GlueContext(sparkContext)
val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
Job.init(args("JOB_NAME"), glueContext, args.asJava)

val dataSource = ...
val mappedData = ...

val sqlContext = new SQLContext(sparkContext)
val outputDf = mappedData.toDF()
outputDf.saveToEs(indexName)

Job.commit()
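
For completeness, I believe the same write can also be expressed through the DataFrame writer API, with the es.* settings passed as per-write options instead of on the SparkConf. This is only a sketch reusing the names above, not something the connector requires:

// Sketch: same ES settings passed as writer options rather than via SparkConf
outputDf.write
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", esHost)
  .option("es.port", esPort)
  .option("es.nodes.wan.only", "true")
  .save(indexName)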

ERROR
Exception in User Class: org.elasticsearch.hadoop.ESHadoopIllegalArgumentException : Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/cloud instance without the proper setting es.nodes.wan.only

ES version on the node: 7.2.0.
JAR in use: elasticsearch-spark-20_2.11-7.2.0 (I've read the connector version has to match the cluster version).
The ES node is an EC2 instance inside the same VPC as the Glue job.
I've also tried using the fully qualified domain name of the server instead of the IP.
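
To rule out plain network reachability from inside the job, this is the kind of quick check I'd run on the driver before anything else. It is a minimal sketch using only java.net from the JDK, with the host and port from above:

import java.net.{InetSocketAddress, Socket}

// Minimal TCP reachability check from inside the Glue job:
// succeeds only if the worker can open a connection to ES on 9200.
def canReach(host: String, port: Int, timeoutMs: Int = 5000): Boolean = {
  val socket = new Socket()
  try {
    socket.connect(new InetSocketAddress(host, port), timeoutMs)
    true
  } catch {
    case _: Exception => false
  } finally {
    socket.close()
  }
}

println(s"ES reachable: ${canReach("10.196.67.240", 9200)}")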

I've read other posts about similar problems, but the answers are all generic ("it's a connection issue") and the original posters never come back with the actual fix, if there ever was one.
