How to ignore exceptions when bulk updating with PySpark if a doc doesn't exist

Hi,

I am trying to do an update operation with the elasticsearch-hadoop package in PySpark. The documentation says that if no document is found, an exception is thrown. What is the best way to handle this exception? Or is it possible to pass something like the raise_on_exception=False, raise_on_error=False options provided by the Python Elasticsearch API?
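For context, here is a minimal sketch of the kind of write that triggers this. The index name, node address, and DataFrame columns are assumptions for illustration; the `es.write.operation` and `es.mapping.id` settings are the standard elasticsearch-hadoop options:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("es-update").getOrCreate()

# Hypothetical updates keyed by "id"; suppose id 2 does not exist in the index.
updates = spark.createDataFrame(
    [(1, "new value"), (2, "another value")],
    ["id", "field"],
)

# With es.write.operation=update, any row whose _id is missing from the index
# makes the bulk response report a document_missing_exception, and the
# connector fails the Spark task.
(updates.write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "localhost")          # assumption: local cluster
    .option("es.mapping.id", "id")            # use the "id" column as _id
    .option("es.write.operation", "update")   # update-only: fails on missing docs
    .mode("append")
    .save("my-index"))                        # assumption: index name
```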

Thanks!

@tree Currently there is no way to suppress the error when it occurs. If a document is missing when an update is executed, there's nothing for the connector to do but fail the task. A better option would be to avoid the update operation and stick to the upsert operation instead, as it has fewer cases where it might fail.
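For reference, here is what the upsert variant might look like, reusing the hypothetical DataFrame and index name from the question above. The only change is the `es.write.operation` value; with `upsert`, rows whose `_id` is missing are inserted as new documents instead of failing the task:

```python
# Same write as before, but upsert: insert-or-update instead of update-only,
# so a missing document no longer raises document_missing_exception.
(updates.write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "localhost")          # assumption: local cluster
    .option("es.mapping.id", "id")
    .option("es.write.operation", "upsert")
    .mode("append")
    .save("my-index"))                        # assumption: index name
```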

Thank you! @james.baiera
