Bulk Operation Results from Databricks Spark Job

Hello,

We are using Elasticsearch 5.0.

I am performing bulk writes from a DataFrame to Elasticsearch using Spark. The writes are performed with .option("es.write.operation", "upsert") and .mode("append"). Note that we also set other options related to batch size (bytes and entries).
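For reference, the write looks roughly like this (the index name, id column, and batch sizes below are placeholders, not our real values):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("es-upsert-sketch").getOrCreate()
val df = spark.read.parquet("/path/to/source")   // placeholder source

df.write
  .format("org.elasticsearch.spark.sql")
  .option("es.write.operation", "upsert")        // update-or-insert per document
  .option("es.mapping.id", "doc_id")             // column used as the _id (assumed name)
  .option("es.batch.size.bytes", "1mb")          // bulk flush threshold by size
  .option("es.batch.size.entries", "1000")       // bulk flush threshold by document count
  .mode("append")
  .save("my-index/my-type")                      // ES 5.x uses index/type
```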

I am trying to understand whether there is a way to capture the bulk operation results, either on the Spark side or by forcing Elasticsearch to store the results of all bulk operations (e.g. as tasks) automatically so that I can look them up later. What is the best way to do this?
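One idea I have been toying with is to bypass the connector for the write and call the _bulk endpoint directly from foreachPartition, since the bulk response body reports a status for every item. This is only a rough sketch; the host, index/type, and column names (doc_id, field1) are placeholders:

```scala
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

def postBulk(esUrl: String, body: String): String = {
  val conn = new URL(s"$esUrl/_bulk").openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setDoOutput(true)
  conn.setRequestProperty("Content-Type", "application/x-ndjson")
  conn.getOutputStream.write(body.getBytes(StandardCharsets.UTF_8))
  val resp = scala.io.Source.fromInputStream(conn.getInputStream).mkString
  conn.disconnect()
  resp
}

df.rdd.foreachPartition { rows =>
  val body = rows.map { row =>
    val id = row.getAs[String]("doc_id")  // assumed id column
    // bulk upsert = update action line + doc_as_upsert; "field1" is a placeholder column
    s"""{"update":{"_index":"my-index","_type":"my-type","_id":"$id"}}""" + "\n" +
      s"""{"doc":{"field1":"${row.getAs[String]("field1")}"},"doc_as_upsert":true}"""
  }.mkString("", "\n", "\n")
  if (body.nonEmpty) {
    val resp = postBulk("http://es-host:9200", body)
    // The response carries a status per item; a crude top-level check is the
    // "errors" flag. Failed items could be written to a durable sink here.
    if (resp.contains("\"errors\":true")) println(resp)
  }
}
```

But that gives up the connector's batching and retry behavior, so I would rather find a supported way if one exists.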

Note: my eventual goal is to determine whether parallel bulk operations cause any consistency issues, such as partial updates/inserts to documents, and to identify the affected documents.
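For example, one check I could run after the write, assuming each document carries a stable id (doc_id) and a version-like column (updated_at) that the source also holds, is to read the index back through the connector and anti-join against the source DataFrame:

```scala
// Read-back check: any rows that survive the anti-join are documents that
// never arrived or were left with unexpected values by a competing writer.
// "doc_id" and "updated_at" are assumed column names.
val written = spark.read
  .format("org.elasticsearch.spark.sql")
  .load("my-index/my-type")
  .select("doc_id", "updated_at")

val missingOrStale = df
  .select("doc_id", "updated_at")
  .join(written, Seq("doc_id", "updated_at"), "left_anti")

missingOrStale.show()
```

That only catches divergence after the fact, though, which is why I am hoping to capture the bulk results at write time.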

Anyone?

Keeping the thread alive.
