While running a Databricks notebook to save data to Elasticsearch Cloud, you may see the error below:
“org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'”
To resolve this issue, pass the credentials along with the Elasticsearch URL and enable the 'es.nodes.wan.only' setting:
val esURL = "https://elastic-url"  // Elasticsearch Cloud endpoint

// Read the source CSV from DBFS
val df = spark.read.option("header", "true").csv("/mnt/akc_breed_info.csv")

// Write to Elasticsearch over SSL, with credentials and WAN-only mode enabled
df.write
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes.wan.only", "true")   // required when targeting a cloud/WAN instance
  .option("es.port", "443")
  .option("es.net.ssl", "true")
  .option("es.net.http.auth.user", "uuuuu")
  .option("es.net.http.auth.pass", "ppppp")
  .option("es.nodes", esURL)
  .mode("overwrite")
  .save("index/dogs")
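To confirm the write succeeded, you can read the index back with the same connection options. This is a quick sketch that reuses the placeholder credentials and the "index/dogs" index from above:

// Read the index back to verify the data landed in Elasticsearch
val dogsDF = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes.wan.only", "true")
  .option("es.port", "443")
  .option("es.net.ssl", "true")
  .option("es.net.http.auth.user", "uuuuu")
  .option("es.net.http.auth.pass", "ppppp")
  .option("es.nodes", esURL)
  .load("index/dogs")

dogsDF.show(5)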
If you still face any issues, please let me know.