Hi,
I am using Spark to stream data and write it to ES. Yesterday I downloaded
elasticsearch-hadoop-2.1.0.BUILD-SNAPSHOT.jar and switched from
saveAsNewAPIHadoopFiles to the 'saveToEs' method. With the 2.1.0 jar on the
classpath I get the following error:
scala.MatchError: org.apache.hadoop.io.MapWritable@75a899ad (of class org.apache.hadoop.io.MapWritable)
    at org.elasticsearch.spark.serialization.ScalaMapFieldExtractor.extractField(ScalaMapFieldExtractor.scala:10)
    at org.elasticsearch.hadoop.serialization.field.ConstantFieldExtractor.field(ConstantFieldExtractor.java:32)
    at org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.append(AbstractIndexExtractor.java:101)
    at org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.field(AbstractIndexExtractor.java:119)
    at org.elasticsearch.hadoop.serialization.field.AbstractIndexExtractor.field(AbstractIndexExtractor.java:31)
    at org.elasticsearch.hadoop.serialization.bulk.AbstractBulkFactory$FieldWriter.write(AbstractBulkFactory.java:73)
    at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.writeTemplate(TemplatedBulk.java:77)
    at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.write(TemplatedBulk.java:53)
    at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:130)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:33)
    at org.elasticsearch.spark.rdd.EsRDDFunctions$$anonfun$saveToEs$1.apply(EsRDDFunctions.scala:43)
    at org.elasticsearch.spark.rdd.EsRDDFunctions$$anonfun$saveToEs$1.apply(EsRDDFunctions.scala:43)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
    at org.apache.spark.scheduler.Task.run(Task.scala:51)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
saveAsNewAPIHadoopFiles works from Spark, but I would like to use the
dynamic/multi-resource feature so the index name is created dynamically from
Spark. Any help is greatly appreciated.
I am using elasticsearch-1.2.1,
elasticsearch-hadoop-2.1.0.BUILD-SNAPSHOT.jar, and spark-streaming_2.10-1.0.1.
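In case it is useful, here is roughly what I am trying to end up with. This is only a sketch, not something I have verified against the snapshot jar: the field name `media_type`, the index pattern `logs-{media_type}/entry`, and the socket stream are placeholders I made up. My (possibly wrong) understanding of the error is that ScalaMapFieldExtractor pattern-matches on Scala collections, so feeding it the MapWritable records I built for saveAsNewAPIHadoopFiles fails; converting each record to a plain Scala Map first seems like the intended usage:

```scala
// Sketch only: assumes spark-streaming_2.10-1.0.1 and the
// elasticsearch-hadoop 2.1.0 snapshot are on the classpath.
// Index/field names (logs-*, media_type) are made-up placeholders.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.elasticsearch.spark._ // adds saveToEs to RDDs

object DynamicIndexExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("es-dynamic-index")
      .set("es.nodes", "localhost:9200")
    val ssc = new StreamingContext(conf, Seconds(5))

    val lines = ssc.socketTextStream("localhost", 9999)

    lines.foreachRDD { rdd =>
      // Build plain Scala Maps rather than MapWritable values;
      // the Scala field extractor matches on Scala types, which
      // appears to be why MapWritable triggers the MatchError.
      val docs = rdd.map(line =>
        Map("media_type" -> "video", "message" -> line))

      // {media_type} is resolved per document, so each record is
      // routed to a dynamically named index (e.g. logs-video).
      docs.saveToEs("logs-{media_type}/entry")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

With this shape the index name comes out of the document itself, which is the multi-resource behaviour I am after.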