Do you mean that the file is overwritten each time new logs are created? If so, it can be difficult for the collector to know when the file has to be read from the beginning again.
To avoid this problem, and to keep log files from growing indefinitely, the usual approach is to rotate them: from time to time, move the existing file to a new path and create a new file at the path you are collecting from. There are multiple strategies to implement this: you can use software like logrotate, or you can implement rotation in your own application.
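For illustration, a minimal logrotate rule might look like the sketch below. The path `/var/log/myapp/output.csv` is an assumption, standing in for wherever your application writes the CSV; `copytruncate` is used here because it lets the writing application keep its file handle open instead of requiring it to reopen the file after rotation.

```
# Hypothetical /etc/logrotate.d/myapp — adjust the path to your setup
/var/log/myapp/output.csv {
    daily            # rotate once per day
    rotate 7         # keep the last 7 rotated files
    compress         # gzip rotated files
    missingok        # don't error if the file is absent
    notifempty       # skip rotation when the file is empty
    copytruncate     # copy then truncate, so the writer's open handle stays valid
}
```

Note that with `copytruncate` there is a small window between the copy and the truncate where freshly written lines can be lost, so if your application can be taught to reopen the file on rotation, that is generally safer.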
Another option that could fit your case is to write to a timestamped file each time, so every new CSV goes into a new, different file. Then you can use something like tmpreaper to remove old files. You could configure a path with a wildcard in Filebeat so it collects from all matching files; take a look at the close_inactive and close_eof settings if you try something like this.
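A minimal sketch of what that Filebeat input could look like, assuming your application writes files such as `/var/log/myapp/output-20240101T120000.csv` (the directory and naming pattern are assumptions for illustration):

```
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/output-*.csv   # wildcard matches every timestamped CSV
    # Close the harvester once a file sees no new data for 5 minutes,
    # so handlers for old, finished files are released promptly.
    close_inactive: 5m
    # Since each file is written once and never appended to,
    # the harvester can be closed as soon as it reaches end of file.
    close_eof: true
```

With write-once files, `close_eof: true` keeps the number of open file handles low, while the wildcard path means newly created files are picked up automatically.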