Hello,
I'm using Spark 2.0 with elasticsearch-spark 5.0.2 (Scala 2.11):
"org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.0.2"
When trying to read from an index I'm getting the following error:
scala.MatchError: Buffer() (of class scala.collection.convert.Wrappers$JListWrapper)
This is how I'm getting the data (session is an instance of SparkSession):
session.sqlContext.read.format("org.elasticsearch.spark.sql").options(opt).load(esIndexName)
opt is a Map of options:
Map("es.read.field.as.array.include" -> "fieldNames",
    "es.input.json" -> "true",
    "es.field.read.empty.as.null" -> "true",
    "es.index.read.missing.as.empty" -> "true",
    "es.read.field.exclude" -> excludedFields)
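One thing I considered, based on the connector docs, is declaring the nested path in `es.read.field.as.array.include`, since Elasticsearch mappings don't distinguish a single value from an array and the connector guesses StringType. This is only a sketch of what I mean (the path `a.b.c` is a placeholder for my actual field):

```scala
// Sketch (assumption: the problem field lives at path a.b.c).
// If a document actually carries an array at that path while the
// inferred schema says string, reads fail with MatchError: Buffer().
// Declaring the path as an array would force an ArrayType instead.
val opt = Map(
  "es.read.field.as.array.include" -> "a.b.c",
  "es.input.json" -> "true",
  "es.field.read.empty.as.null" -> "true",
  "es.index.read.missing.as.empty" -> "true"
)
println(opt("es.read.field.as.array.include")) // a.b.c
```

Would that be the right approach here, given that the field is a nested object rather than an array?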
The field in question is not an array; it's a nested object field:
a {
b {
c = "string value"
}
}
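To narrow things down, I can reproduce the same kind of MatchError in plain Scala without Spark. This is just an illustration of what I assume the Catalyst string converter is hitting: a java.util.List wrapped as a Scala Buffer reaching a match that only expects a String.

```scala
import scala.collection.JavaConverters._

// An empty java.util.List wrapped via asScala prints as "Buffer()"
// and has runtime class Wrappers$JListWrapper (on Scala 2.11/2.12),
// matching the value in the error message.
val javaList = new java.util.ArrayList[String]()
val wrapped: Any = javaList.asScala

// A converter that only handles String, like the Catalyst StringConverter
// presumably does, blows up with scala.MatchError on the Buffer.
def toCatalystString(value: Any): String = value match {
  case s: String => s
}

val thrown =
  try { toCatalystString(wrapped); false }
  catch { case _: MatchError => true }

println(thrown) // true
```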
I found a similar issue here:
But after upgrading to the latest elasticsearch-spark library version, I'm still seeing the problem.
I would appreciate any suggestions on how to fix this.
Thanks,
M
Spark stack trace:
WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, host): scala.MatchError: Buffer() (of class scala.collection.convert.Wrappers$JListWrapper)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:296)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:103)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:261)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:251)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:103)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:261)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:251)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:103)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$ArrayConverter$$anonfun$toCatalystImpl$2.apply(CatalystTypeConverters.scala:164)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)