Multiple fast consecutive updates on the same document

Hi there,

I hope someone can help me with a very specific problem I have.

We integrated Elasticsearch into our multi-tenant system.

Because of how our system works, we are forced to use `update_by_query` instead of `update`.

This system sends multiple consecutive updates to Elasticsearch for the same document within one second.
Because of this, we lose data from some requests, since some of the updates are not executed.
My guess is that the data is lost because a document only becomes visible to new updates after about one second.
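If I understand it correctly, this matches how Elasticsearch refresh works: `_update_by_query` runs a search against the last refreshed snapshot of the index, so an update issued before the previous write has been refreshed reads stale data. Here is a minimal toy model in plain Python (not the real client; `ToyIndex`, `live`, and `searchable` are invented names for illustration) showing how two fast consecutive update-by-query calls can collapse into one effective update:

```python
class ToyIndex:
    """Toy model of Elasticsearch visibility: writes land in the index
    immediately, but searches (and update_by_query, which is search-based)
    only see the snapshot taken at the last refresh."""

    def __init__(self):
        self.live = {}        # latest written documents
        self.searchable = {}  # snapshot visible to search-based operations

    def refresh(self):
        # A refresh makes all writes so far visible to searches.
        self.searchable = dict(self.live)

    def update_by_query(self, doc_id, fn):
        # update_by_query reads from the last refreshed snapshot,
        # so it cannot see writes made since that refresh.
        if doc_id in self.searchable:
            self.live[doc_id] = fn(self.searchable[doc_id])


idx = ToyIndex()
idx.live["doc1"] = {"count": 0}
idx.refresh()

# Two fast consecutive updates, with no refresh in between:
idx.update_by_query("doc1", lambda d: {"count": d["count"] + 1})
idx.update_by_query("doc1", lambda d: {"count": d["count"] + 1})
idx.refresh()

print(idx.live["doc1"]["count"])  # prints 1, not 2 -- the second
# update read the stale snapshot where count was still 0
```

In real Elasticsearch, possible mitigations along these lines are passing `refresh=true` on the `_update_by_query` request (or calling `POST /<index>/_refresh` between updates) so each update sees the previous one, and setting `conflicts=proceed` plus retries to handle version conflicts, though frequent refreshes have an indexing-performance cost.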

Now my question: is there any way to make sure these requests are not lost? Is there some kind of cache that can be enabled which I overlooked in the documentation?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.