mapper_parsing_exception: failed to parse (DataFrame to Elasticsearch, batch process)

We are writing documents to Elasticsearch from a PySpark DataFrame:

.option('es.nodes', '')
.option('', 'true')
.option('es.nodes.wan.only', 'true')
.option('', 'file:///opt/escerts/test.jks')
.option('', 'password')
.option('es.resource', 'index_name/_doc')
.option('es.batch.size.bytes', '70mb')
.option('es.batch.size.entries', '23000')
.option('es.batch.write.refresh', 'false').save()
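
For reference, a minimal sketch of what I understand the full write call to look like (the host name and id column below are placeholders, not our real values; `es.mapping.id` is the connector option that maps a DataFrame column to the document `_id`, so no id metadata has to live inside the document source itself):

```python
# Sketch only -- not runnable without a Spark session, the
# elasticsearch-hadoop connector jar, and a reachable cluster.
# 'my-es-host' and 'doc_id' are example placeholders.
writer = (
    df.write.format('org.elasticsearch.spark.sql')
      .option('es.nodes', 'my-es-host')            # placeholder host
      .option('es.nodes.wan.only', 'true')
      .option('es.resource', 'index_name/_doc')
      .option('es.mapping.id', 'doc_id')           # column used as _id
      .mode('append')
)
writer.save()
```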

Earlier I tried with .option('es.resource', 'index_name') and got the following error, so I changed it to 'index_name/_doc' and that error went away:
org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: No type found; Types are required when writing in ES versions 6 and below. Expected [index]/[type], but got [index_name]
Finally, it ended up with the error below. It worked fine in the previous version; this started happening after an upgrade. The index id is coming in as a separate node in the document, which is causing the issue. When I tried it in Kibana and added the same doc without the {"index":{"id":"3J000499841"}} line, it was indexed without any issue. Please help me solve this.
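
My understanding of why the Kibana test works: in the _bulk API, action metadata such as {"index":{"_id":...}} must be its own newline-delimited JSON line, separate from the document source; if that object ends up inside the source line, Elasticsearch tries to parse it as a document field and can fail with mapper_parsing_exception. A small illustration of the expected payload shape (field, index, and id values are examples; note the bulk metadata key is `_id`, with an underscore):

```python
import json

# A bulk payload is NDJSON: an action/metadata line, then the
# document source on the next line.
doc = {"field1": "value1"}
action = {"index": {"_index": "index_name", "_id": "3J000499841"}}

bulk_body = "\n".join([json.dumps(action), json.dumps(doc)]) + "\n"

# The metadata stays on its own line; the source line contains only
# the document's own fields.
lines = bulk_body.strip().split("\n")
assert json.loads(lines[0])["index"]["_id"] == "3J000499841"
assert "index" not in json.loads(lines[1])
```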
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (ip-10-92-229-81.ec2.internal executor 1): org.apache.spark.util.TaskCompletionListenerException: mapper_parsing_exception: failed to parse