I'm a new user, and I'm running into this error. Indeed, the field it's complaining about is an array.
I'm running elasticsearch-hadoop-2.2.0-rc1
I saw this topic about the error and tried the fix there, with no improvement. It could well be some newbie mistake on my part.
I am attempting to set the es.read.field.as.array.include option as follows (in spark-shell):
```scala
val options = Map("es.read.field.as.array.include" -> "*addr_geo.*.geoname_id")
var flows = sqlContext.read
  .format("org.elasticsearch.spark.sql")
  .options(options)
  .load("flows-*/flow_log")
flows.show
```
But I observe no change in behavior. Is there some other way I should be setting this option?
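For reference, here is a sketch of the other ways I understand elasticsearch-hadoop settings can be passed, based on my reading of the configuration docs (I haven't confirmed these change the outcome; the field pattern is the same one from my attempt above):

```scala
// Sketch only: assumes the elasticsearch-spark jar is on the classpath
// and that sqlContext is the spark-shell SQLContext.

// 1. Per-read, via .option() instead of an options Map:
val flows = sqlContext.read
  .format("org.elasticsearch.spark.sql")
  .option("es.read.field.as.array.include", "*addr_geo.*.geoname_id")
  .load("flows-*/flow_log")

// 2. Globally, on the SparkConf before the SparkContext is created
// (only applicable in a submitted job, not inside an already-running
// spark-shell session):
import org.apache.spark.SparkConf
val conf = new SparkConf()
  .set("es.read.field.as.array.include", "*addr_geo.*.geoname_id")
```

If the per-read `.option()` form is equivalent to the `options(Map(...))` form I used, then presumably the problem is with the pattern itself rather than with where the option is set.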
As a minor comment, it would be helpful if the documentation for these options included a Scala example. I can see what each option is and what value it should have, but it's not clear to me where to set it.