In our project we schedule archiving of log files so they do not grow too large. The problem is that filebeat harvests files periodically, with the interval between harvest attempts governed by the backoff settings.
But what if a log entry is written and the archiving task runs in between two harvest attempts? Then I would miss that log entry, which might be fatal depending on what it contained.
Is there a "true filebeat way" to cope with the situation?
As a possible solution I could set the backoff/max_backoff intervals small enough that this situation can't happen, but that would cause quite a lot of overhead due to overly frequent harvesting.
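For illustration, this is roughly the configuration I mean (a sketch using the standard log input backoff options; the path and the concrete values are made up, and older filebeat versions use `filebeat.prospectors` instead of `filebeat.inputs`):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # hypothetical path
    # After reaching EOF, filebeat waits `backoff` before re-checking the
    # file; the wait is multiplied by `backoff_factor` on each idle check,
    # capped at `max_backoff`.
    backoff: 1s
    backoff_factor: 2
    max_backoff: 2s   # a low cap narrows the window, but means more frequent reads
```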