Hello, I'm using Spark SQL to extract data from Elasticsearch into CSV files.
Software versions used:
- Elasticsearch 5.5.2
- Spark SQL 2.1.1
- elasticsearch-spark-20_2.11 5.5.2
- Scala 2.11.8
I query Elasticsearch through an alias.
When the alias is updated (old indices removed, new ones added) while the Spark job is running, the job fails.
Is it possible to keep reading from the same underlying indices from the beginning to the end of the Spark job execution (an atomic read)?
My workaround is to resolve the indices associated with the alias at the start of the Spark job and initialize the DataFrame with those concrete index names instead of the alias.
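For reference, a minimal sketch of that workaround (assuming a local node on port 9200, a hypothetical alias name `my_alias`, and a quick regex extraction of the index names instead of a proper JSON parser):

```scala
import scala.io.Source
import org.apache.spark.sql.SparkSession

// Resolve the concrete indices behind the alias once, at job start.
// GET /_alias/<name> returns {"index-a":{"aliases":{...}},"index-b":{...}};
// the top-level keys are the index names.
val aliasJson = Source.fromURL("http://localhost:9200/_alias/my_alias").mkString
val indices = """"([^"]+)"\s*:\s*\{\s*"aliases"""".r
  .findAllMatchIn(aliasJson)
  .map(_.group(1))
  .mkString(",")

val spark = SparkSession.builder().appName("es-to-csv").getOrCreate()

// Load from the fixed, comma-separated index list instead of the alias,
// so a later alias swap does not change what the running job reads.
val df = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "localhost")
  .load(indices)

df.write.option("header", "true").csv("/tmp/es-export")
```

Note that this pins the job to the index names that existed at startup; if those indices are physically deleted mid-job (not just removed from the alias), the job would still fail.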
Is there a better solution?