Hi,
We are looking to tune our filebeat.autodiscover settings to better handle log collection from ephemeral Docker containers. The main issue we have is that when a container is stopped and removed, its log file is removed as well; Filebeat then closes the harvester and never finishes reading the remainder of the file.
We've been told that setting close_removed: false should help with this situation, since it keeps the harvester open even if the corresponding log file is deleted or removed.
The Filebeat documentation also states that if close_removed is set to false, then clean_removed: false should be set as well.
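For reference, this is a sketch of the kind of autodiscover template we have in mind; the provider, paths, and values below are illustrative placeholders, not our actual config:

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
              # Keep the harvester open after the container's log file is removed,
              # so the remaining lines can still be read.
              close_removed: false
              # Per the docs, also keep the registry entry while the file is gone.
              clean_removed: false
```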
We were wondering, then: does the state for these harvesters/files ever get removed at some later point? Will clean_inactive (or some other clean_* setting) help remove the state once the harvester finishes reading the log file?
What other settings should we set or change so that the Filebeat registry does not grow unbounded?
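Our current understanding (please correct us if this is wrong) is that clean_inactive needs to be greater than ignore_older plus scan_frequency, so we were considering something along these lines, with placeholder durations:

```yaml
# Placeholder durations; our understanding is that
# clean_inactive must be > ignore_older + scan_frequency.
- type: container
  paths:
    - /var/lib/docker/containers/${data.docker.container.id}/*.log
  close_removed: false
  clean_removed: false
  # Stop harvesting files that haven't been updated recently...
  ignore_older: 2h
  # ...and drop their registry entries after this period of inactivity,
  # so the registry doesn't grow unbounded.
  clean_inactive: 3h
  scan_frequency: 10s
```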
Let me know if there is anything else I should clarify. Thank you in advance!