I am facing difficulties with Filebeat.
My present setup is that I am using Filebeat (installed on a Windows server) to ship data (in CSV format) to Logstash and finally to Elasticsearch (both installed on a Linux server).
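For reference, the shipping side looks roughly like the sketch below. This is not my exact configuration — the paths, port, and host name are placeholders:

```yaml
# filebeat.yml (sketch, placeholder values)
filebeat.inputs:
  - type: log                      # tails files and tracks a per-file offset
    paths:
      - 'C:\jobs\output\*.csv'     # directory where the PowerShell script writes the CSV

output.logstash:
  hosts: ["linux-server:5044"]     # Logstash beats input on the Linux server
```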
The CSV file is generated by a PowerShell script, which is launched during the execution of a job.
This job is launched several times a day, and each run adds more lines to the CSV file — at any position in the file, not only at the end.
It sometimes happens that lines which are present in the CSV file are missing from the indexed Elasticsearch documents.
[Screenshot: CSV file at a certain time during the day]
[Screenshot: the same CSV file at a later time, with new lines inserted during the day]
So, my questions are as follows:
• When a new file is added to the directory Filebeat watches, a harvester is started for the file. Does the harvester detect all changes made to the CSV file, even though the new lines are not necessarily appended at the end of the file?
• How can we verify that the harvester has processed every line in the CSV file?
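To illustrate what I suspect is happening: a tailer that tracks a byte offset per file (as Filebeat's harvester does via its registry) only reads bytes past the saved offset. This is a minimal Python simulation of that behavior, not Filebeat's actual code, but it shows how a line inserted in the middle of the file can be missed while trailing bytes get re-read:

```python
import tempfile

def read_new_lines(path, offset):
    """Read from a saved byte offset to end of file, like an offset-tracking
    log tailer. Returns (new_lines, new_offset)."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
        return data.decode().splitlines(), f.tell()

# Create a CSV file with a header and one row (hypothetical job data).
path = tempfile.mkstemp(suffix=".csv")[1]
with open(path, "w") as f:
    f.write("job,start,status\n")
    f.write("A,08:00,ok\n")

# First pass: the tailer reads everything and remembers the end offset.
lines, offset = read_new_lines(path, 0)
print(lines)   # ['job,start,status', 'A,08:00,ok']

# A later job run inserts a new row in the MIDDLE of the file.
with open(path) as f:
    existing = f.readlines()
existing.insert(1, "B,09:00,ok\n")
with open(path, "w") as f:
    f.writelines(existing)

# Second pass: the tailer resumes from the old offset. The inserted row "B"
# sits BEFORE that offset, so it is never seen; only the shifted trailing
# bytes (the old row "A") are read again.
lines, offset = read_new_lines(path, offset)
print(lines)   # ['A,08:00,ok'] — row B is silently skipped
```

If Filebeat behaves like this, it would explain the missing documents whenever the PowerShell script inserts lines above the last harvested offset.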
Thanks in advance.