I've encountered a bug with Filebeat: it is sending the whole log again.
Is there a way to stop Filebeat from re-reading the log history from the beginning and instead have it read only the last 60 to 100 lines?
Filebeat uses a registry to track which files it has read and, for each file, the offset it has reached.
It is not designed to start reading from an arbitrary location.
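For reference, here is a minimal sketch of how you can inspect that registry, assuming the legacy single-file JSON registry used by Filebeat 6.x installed from deb/rpm (newer versions keep the registry in a directory under `data/registry/` with a different layout, and the path below may differ for your install):

```python
import json

# Legacy registry location for deb/rpm installs; adjust for your setup.
REGISTRY_PATH = "/var/lib/filebeat/registry"

with open(REGISTRY_PATH) as f:
    states = json.load(f)  # JSON array of per-file states

# Each state records the file and the byte offset Filebeat will resume from.
for state in states:
    print(state.get("source"), state.get("offset"))
```

A common reason for the whole log being sent again is that this registry was removed, or the file was detected as new (for example, a rotation or copy-truncate changed the inode), so Filebeat resumes from offset 0.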
Is there a way to force Filebeat not to resend log lines that have already been sent?
Do you mean not sending duplicate lines when the same lines have already been sent earlier?
It is not possible directly in Filebeat, but you can follow https://www.elastic.co/blog/efficient-duplicate-prevention-for-event-based-data-in-elasticsearch to remove duplicates on the Elasticsearch side.
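The idea in that blog post is to give each event a deterministic document ID (for example, a hash of the fields that uniquely identify it), so a re-sent event overwrites the existing document instead of creating a duplicate. Here is a minimal sketch of that idea using the `elasticsearch` Python client (8.x signature); the index name, field set, and cluster URL are just placeholders for illustration:

```python
import hashlib
import json

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def deterministic_id(event: dict) -> str:
    # Hash the fields that uniquely identify the event;
    # a duplicate event produces the same ID.
    raw = json.dumps(
        {k: event[k] for k in ("@timestamp", "log.file.path", "message")},
        sort_keys=True,
    )
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

event = {
    "@timestamp": "2023-01-01T00:00:00Z",
    "log.file.path": "/var/log/app.log",
    "message": "example log line",
}

# Indexing the same event twice updates the same document
# instead of creating a second copy.
es.index(index="filebeat-dedup-demo", id=deterministic_id(event), document=event)
```

The blog post covers the same technique for events that pass through Logstash, using the fingerprint filter to generate the ID.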