I am using Logstash to move data from Postgres to Elasticsearch. The input is the JDBC plugin, which fetches rows from Postgres using a tracking_column; say a single run picks up 1000 records.
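My input section looks roughly like this (connection details, table and column names are placeholders, not my real setup):

```conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "postgres"
    jdbc_driver_library => "/path/to/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    # Fetch only rows newer than the last recorded tracking value
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    # File where Logstash persists sql_last_value between runs
    last_run_metadata_path => "/path/to/.logstash_jdbc_last_run"
    schedule => "* * * * *"
  }
}
```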
The output is Elasticsearch, which indexes those 1000 records. If an error occurs while writing to Elasticsearch, say the Elasticsearch node is unreachable for some reason, Logstash still advances the tracking column and moves on to the next run, so those records are never indexed.
Is there a way to rerun the same batch when an error occurs, or at least dump the failed records to a file so that I can re-index them manually afterwards?
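The only workaround I have come up with so far is a second, parallel file output as a crude backup, but that writes every event, not just the ones that failed to index (host, index, and path below are placeholders):

```conf
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my_index"
  }
  # Duplicates all events to disk, failed or not,
  # which is not really what I want
  file {
    path => "/path/to/backup-%{+YYYY-MM-dd}.log"
    codec => json_lines
  }
}
```

Is there something more targeted, for example a way to capture only the events the Elasticsearch output could not deliver?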