I have multiple long-running jobs that produce a lot of logs. We use the Linux logrotate utility to rotate them. The problem is that filebeat can miss logs. For example, say logs are written at high frequency to a file named output.log; as soon as the file reaches 200M, we rotate it.
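For context, our rotation setup looks roughly like this (a sketch; the path and `rotate` count are illustrative):

```
/var/log/myapp/output.log {
    size 200M      # rotate once the file reaches 200M
    rotate 5       # keep five rotated files (output.log.1 .. output.log.5)
    missingok
    notifempty
}
```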
If filebeat is down or lagging behind, it can miss logs because the content of output.log has already been moved to output.log.1.
If we also scan the output.log* files, then we get duplicates.
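The filebeat input that produces the duplicates is roughly this (a sketch; the path is illustrative):

```yaml
filebeat.inputs:
  - type: log
    paths:
      # Matches both the live file and the rotated copies,
      # so the same lines can be picked up twice.
      - /var/log/myapp/output.log*
```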
- How can we design a solution with filebeat and logrotate so that no log message is missed?
- Can filebeat itself rotate files? Since filebeat knows how far it has read, letting it drive the rotation would seem to be the best solution.