Hi all,
We are exporting a CSV with a cron job and putting it inside a folder. Filebeat reads and processes the file for visualization in Kibana. (Filebeat and the folder are on the same host.)
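For context, the CSV input is configured roughly like this (the path here is a placeholder, not our real one):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /data/export/report.csv   # placeholder path for the exported CSV
```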
The CSV content is very simple: it has two lines, a header line and a second line with the data.
The first time the file is exported, Filebeat processes it and we can see it in the Discover dashboard. However, on subsequent runs Filebeat seems to 'ignore' it, even though the data inside the CSV changes every 15 minutes or so.
So I have been testing the following, both of which triggered Filebeat to process the CSV:
- Editing the content (e.g. adding some numbers) and saving it under the same name
- Copying the file and renaming it to a different name
As a temporary workaround, the export now saves the file under a name that includes the current date and time. This triggers Filebeat to process it because it sees a new file (rough sketch below).
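A sketch of the input for this workaround, assuming the export names the files something like report-YYYYMMDD-HHMM.csv (the pattern and durations are just examples); ignore_older/clean_inactive are there so the registry doesn't grow forever as new files appear every 15 minutes:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /data/export/report-*.csv   # matches e.g. report-20240101-1530.csv
    close_eof: true       # close the harvester once the two lines are read
    ignore_older: 1h      # stop picking up files unmodified for over an hour
    clean_inactive: 2h    # drop finished files from the registry (must be > ignore_older)
```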
On the other hand, the same Filebeat also reads JSON files from another directory, and there it does recognize file changes and processes the updated JSON. The only difference between the CSV and the JSON is that the CSV is constantly about 4 KB in size, while the JSON size is always different, anywhere from a few MB to a GB.
This brings me to a question: under what conditions does Filebeat pick a file up for processing? Initially I thought it was based on the file's timestamp, but that doesn't seem to be the case; the size might affect how Filebeat processes the file.
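For reference, these are the settings on the CSV input that seem related to scanning and re-reading, shown with what I believe are the defaults (path is a placeholder). My current guess is that Filebeat keeps a per-file byte offset in its registry and only reads data past that offset, which would explain why a same-size overwrite is skipped while the growing JSON files are picked up, but I'd appreciate confirmation:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /data/export/report.csv   # placeholder
    scan_frequency: 10s   # how often the paths are checked for new/changed files (default)
    close_inactive: 5m    # close the harvester after this long without new data (default)
    close_eof: false      # don't close immediately at end of file (default)
```

Is that the right mental model, or is there a setting I'm missing that would make Filebeat re-read a file that is overwritten with the same size?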