I am using Logstash to transfer data from Elasticsearch to MongoDB. Whenever there is an error or a shutdown of the Logstash instance, Logstash starts reading the Elasticsearch indices from the beginning on restart. This duplicates data in MongoDB and takes a significant amount of time before the latest data arrives. Is there a way to handle this in Logstash, or is a custom solution the way forward?
This is a known issue. I am not aware of a workaround.
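That said, two mitigations are commonly combined: run the input on a schedule with a time-range query so each poll only pulls recent documents, and carry the Elasticsearch `_id` into the event so a re-inserted document hits MongoDB's unique `_id` constraint instead of creating a copy. Below is a minimal sketch, assuming the `logstash-input-elasticsearch` and `logstash-output-mongodb` plugins; the hosts, index name, database/collection names, and the exact `docinfo` metadata field path are assumptions (the metadata path varies between plugin versions), so check them against your installed versions.

```conf
input {
  elasticsearch {
    hosts    => ["localhost:9200"]                  # assumed ES address
    index    => "my-index"                          # hypothetical index name
    # Only fetch documents newer than the last hour instead of a full scan
    query    => '{ "query": { "range": { "@timestamp": { "gte": "now-1h" } } } }'
    schedule => "*/5 * * * *"                       # poll every 5 minutes (cron syntax)
    docinfo  => true                                # expose index/type/id in event metadata
  }
}

filter {
  mutate {
    # Copy the Elasticsearch document id into the event so MongoDB uses it
    # as _id; re-inserting the same document then fails as a duplicate key
    # rather than silently duplicating data.
    # NOTE: the metadata field path depends on the input plugin version.
    add_field => { "_id" => "%{[@metadata][_id]}" }
  }
}

output {
  mongodb {
    uri        => "mongodb://localhost:27017"       # assumed MongoDB address
    database   => "mydb"                            # hypothetical database
    collection => "mycol"                           # hypothetical collection
  }
}
```

This does not give exact-once delivery: the overlap window (`now-1h` here) should be wider than your longest expected downtime, and duplicate-key errors on re-inserted documents will appear in the Logstash log and may need to be tolerated or filtered.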