I am attempting the example from the Spark support for Elasticsearch documentation page:
https://www.elastic.co/guide/en/elasticsearch/hadoop/master/spark.html
import org.elasticsearch.spark.rdd.Metadata._
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")
val sfo = Map("iata" -> "SFO", "name" -> "San Fran")
// metadata for each document
// note it's not required for them to have the same structure
val otpMeta = Map(ID -> 1, TTL -> "3h")
val mucMeta = Map(ID -> 2, VERSION -> "23")
val sfoMeta = Map(ID -> 3)
// instance of SparkContext
val sc = ...
val airportsRDD = sc.makeRDD(Seq((otpMeta, otp), (mucMeta, muc), (sfoMeta, sfo)))
pairRDD.saveToEsWithMeta(airportsRDD, "airports/2015")
I cannot get it to compile because the pairRDD value is not found:
[error] /testspark/src/main/scala/testspark.scala:43: not found: value PairRDD
[error] PairRDD.saveToEsWithMeta(airportsRDD, "airports/2015")
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 31 s, completed Mar 15, 2016 3:44:46 PM
I'm using sbt, and here is my build.sbt:
name := "testspark"
version := "1.0"
scalaVersion := "2.10.6"
libraryDependencies += "org.elasticsearch" % "elasticsearch-spark_2.10" % "2.2.0"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0"
I've also tried Spark 1.5.2 with the same results.
Note: I do have a SparkContext defined, which I've used for the earlier examples in the same document.
I'm sure I must be missing something somewhere.
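My current guess, which I haven't been able to confirm, is that the example intends the save method to be invoked on the RDD itself via the implicit conversions from org.elasticsearch.spark._, rather than on a separate pairRDD value. Something like the following, where the import line is my assumption:

```scala
// assumption: these implicits enrich pair RDDs with saveToEsWithMeta
import org.elasticsearch.spark._
import org.elasticsearch.spark.rdd.Metadata._

// ...same otp/muc/sfo maps, metadata maps, and airportsRDD as above...

// call the method on the RDD itself instead of on a `pairRDD` value
airportsRDD.saveToEsWithMeta("airports/2015")
```

Is that the intended usage, or is pairRDD supposed to be something I define myself?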
Thanks,