Using a custom _id (from Kinesis)

Hello,
Our app uses a Spark job to read from Kinesis and write to Elasticsearch.
We ran into a situation where the job failed to write to Elasticsearch but kept reading from Kinesis, which caused us to lose data.
While trying to figure out a solution, we thought about using the Kinesis checkpoint, which is unique, as the Elasticsearch document _id.
This would mean that on any failure we could simply roll back to a known checkpoint and restart the job, which would just overwrite documents that were already written (if that happens).
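For reference, here is roughly what we have in mind, using the elasticsearch-spark connector's es.mapping.id setting. This is just a minimal sketch: the field names, index name, and es.nodes value are placeholders, and in the real job the RDD would of course come from the Kinesis stream rather than a static collection.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

// Each document carries the Kinesis sequence number of the record it came
// from. The field names (seqNum, payload) are illustrative only.
case class Event(seqNum: String, payload: String)

object IdempotentWrite {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kinesis-to-es")
      .set("es.nodes", "localhost:9200") // assumption: local Elasticsearch

    val sc = new SparkContext(conf)

    // Stand-in for records read from Kinesis; sequence numbers are made up.
    val events = sc.parallelize(Seq(
      Event("49590338271490256608559692538361571095921575989136588898", "first record"),
      Event("49590338271490256608559692540925702759324208523137515618", "second record")
    ))

    // es.mapping.id tells the connector to use the given field as the
    // Elasticsearch _id, so replaying the same records after a rollback
    // overwrites the same documents instead of creating duplicates.
    EsSpark.saveToEs(events, "events/doc", Map("es.mapping.id" -> "seqNum"))
  }
}
```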

What do you think?
Is using a custom _id a proper solution?
What about Elasticsearch performance in this case?

Thanks,
Shushu

This could work, but why not handle the failure in your code a bit better?

Making our code better is certainly the right thing to do, and we keep improving it all the time.
The problem is that stability is still shaky, and if we must keep 100% of the data streaming properly, we can't say "no, our job will never break".
It is software; it will break for one reason or another. The question is how we recover from the failure.
So, when you say "this could work", what exactly do you mean?