Handling failures with Logstash

We have a Logstash pipeline with a JDBC input and an Elasticsearch output.
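
The pipeline is roughly along the following lines; the connection string, query, tracking column, and index name are simplified placeholders for what we actually run:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://db-host:5432/appdb"   # placeholder
    jdbc_user => "logstash"
    jdbc_driver_library => "/path/to/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    schedule => "*/15 * * * *"                        # run every 15 minutes
    statement => "SELECT * FROM orders WHERE updated_at > :sql_last_value ORDER BY updated_at ASC"
    use_column_value => true
    tracking_column => "updated_at"                   # column used to pick up deltas
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/var/lib/logstash/.orders_last_run"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "orders"
  }
}
```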

The Logstash pipeline is scheduled to pick up deltas every 15 minutes. Suppose there are 1000 updates in one 15-minute window and the pipeline fails in the 7th minute, after processing 400 records. When it is restarted, does it resume from delta #1 or from delta #401? In our tests it started again from delta #1. Is there a way to avoid this?
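
One idea we have been looking at is keying each Elasticsearch document on the source table's primary key, so that a re-run over the same window would overwrite the documents that were already indexed rather than duplicate them. A rough sketch of the output section (field and index names are placeholders):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "orders"
    document_id => "%{id}"     # primary key column from the source row
    doc_as_upsert => true      # update the existing doc, or create it if missing
    action => "update"
  }
}
```

Would that be the recommended way to make a partially failed run safe to repeat, or can the jdbc input itself be made to resume from the record where it stopped?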
