Catching exceptions from saveToEs (elasticsearch-spark)

Well, if there's a failure on the Elasticsearch side, I'd like to fail gracefully rather than have my whole job fail. In my specific case there is no simple way to do the checks beforehand, so handling the exception would be easier. Do you see any solutions for this? Maybe a saveToEs parameter to "ignore" the exceptions and log them somewhere?
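For context, this is the kind of thing I'm after. A failed bulk write inside the tasks ultimately surfaces at the driver as a SparkException, so the coarsest workaround I can see is wrapping the `saveToEs` call itself. A minimal sketch (the `docs` RDD, index name, and logger are hypothetical placeholders):

```scala
import org.apache.spark.SparkException
import org.elasticsearch.spark._ // enables saveToEs on RDDs

// `docs` is a hypothetical RDD[Map[String, Any]] of documents to index.
try {
  docs.saveToEs("myindex/doc") // "myindex/doc" is a placeholder resource
} catch {
  case e: SparkException =>
    // Log and move on instead of letting the whole job die.
    log.warn(s"Elasticsearch write failed, continuing: ${e.getMessage}")
}
```

This is all-or-nothing per `saveToEs` call, though; what I'd really want is per-document handling, which is why a built-in "ignore and log" option would be nicer.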
