Hi,
We use Filebeat (version 7.3.1) to ship logs to Logstash. For the input we set:

```yaml
harvester_limit: 30
close_timeout: 15
close_inactive: 5m
```
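For context, here's roughly where those options sit in our filebeat.yml (a sketch only; the `type: log` input, the paths, and the Logstash host below are placeholders, not our exact config):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.txt        # placeholder path
    harvester_limit: 30
    close_timeout: 15
    close_inactive: 5m

output.logstash:
  hosts: ["logstash-host:5044"]   # placeholder host
```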
It works fine for a while. But whenever the number of running harvesters hits the configured harvester_limit, resident memory keeps climbing until the harvester count drops back well under the limit, or until we restart Filebeat.
I checked, and both memory peaks happened at exactly the times when Filebeat started hitting this error:
```
Aug 11 15:27:30 filebeat[126759]: 2021-08-11T15:27:30.154Z  ERROR  log/input.go:519  Harvester could not be started on existing file: 4.txt, Err: harvester limit reached
```
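In case it helps correlate the memory growth with harvester counts: we can poll the Beats stats HTTP endpoint for live numbers (assuming it's available and behaves this way on 7.3.1; the field names below are from the docs and worth double-checking):

```yaml
# filebeat.yml — the local stats endpoint is disabled by default
http.enabled: true
http.host: localhost
http.port: 5066
```

Then, for example:

```sh
# running harvester count plus Go memory stats
curl -s http://localhost:5066/stats | jq '{harvester: .filebeat.harvester, mem: .beat.memstats}'
```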
Has this issue already been addressed and fixed in newer releases?