I understand why this keeps happening, but I'd like a way to handle it. Maybe just ignore these events with a drop or something similar. I've looked at a few ways to intercept Logstash events with the Ruby filter plugin, but found nothing that reaches the status/error I'm interested in.
I do not think a single Logstash instance can produce this exception by itself. When Elasticsearch reads a document to perform an update, it double-checks the version when it writes the document back to the index. If the version in the index is not the version it read, then someone else sneaked in an update ahead of it, and Elasticsearch raises this exception.
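The read-check-write cycle described above is optimistic concurrency control. A toy in-memory sketch (a simplified model for illustration, not Elasticsearch code; the class and exception names are made up) shows how two writers that both read the same version produce exactly this kind of conflict:

```python
class VersionConflictError(Exception):
    pass


class ToyIndex:
    """Minimal model of an index that rejects writes based on a stale version."""

    def __init__(self):
        self.doc = {"counter": 0}
        self.version = 1

    def read(self):
        # A client reads the document together with its current version.
        return dict(self.doc), self.version

    def write(self, doc, expected_version):
        # On write-back, the version is re-checked. If another writer got
        # in first, the versions no longer match and the write is rejected
        # instead of silently overwriting the newer data.
        if self.version != expected_version:
            raise VersionConflictError(
                f"expected version {expected_version}, index is at {self.version}"
            )
        self.doc = dict(doc)
        self.version += 1


index = ToyIndex()

# Writers A and B both read the document at version 1.
doc_a, ver_a = index.read()
doc_b, ver_b = index.read()

# Writer A updates first; the index moves to version 2.
doc_a["counter"] += 1
index.write(doc_a, ver_a)

# Writer B's write is based on a stale version and is rejected.
doc_b["counter"] += 1
try:
    index.write(doc_b, ver_b)
except VersionConflictError as e:
    print("conflict:", e)
```

A single-threaded writer never hits the conflict branch, which is why the exception points to a second writer somewhere.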
I believe a Logstash output is single-threaded, so I see no way for it to cause two parallel updates to the same document.
It could be a second Logstash instance, or something else calling the API.
@leandrojmp and @Badger, thanks for the answers. The question here is not exactly why it is happening; it is exactly as you two have said. The documents are being created and updated at the same pace.