I have a use case where I need to parse and index CSV files that are collected periodically. The challenge is that a given CSV file keeps growing: it gets overwritten with a version containing more rows for some time, until the next CSV with a new filename is created.
Is there any way for Logstash to resume from the last row/record when it re-parses the same (but larger) CSV file, or will it just re-process the whole file and up-version the existing documents in Elasticsearch?
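For context, here is a minimal sketch of the kind of pipeline I have in mind, using the file input in tail mode; the paths, column names, and index name are just placeholders:

```
input {
  file {
    # Placeholder path -- wherever the collected CSVs land
    path => "/data/incoming/*.csv"
    mode => "tail"
    start_position => "beginning"
    # sincedb records how far into each file Logstash has read
    sincedb_path => "/var/lib/logstash/sincedb_csv"
  }
}

filter {
  csv {
    # Placeholder column names
    columns => ["timestamp", "field_a", "field_b"]
    skip_header => true
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # Placeholder index name
    index => "csv-data"
  }
}
```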