Repeatedly restarting Filebeat causes it to lose a log

I'm testing Filebeat 7.12.0 for possible log loss. My input configuration is as follows:

- type: log
  enabled: true
  paths:
    - /xxx/xxx.log

I also wrote a Python program that writes 50,000 log lines, sleeping 1~2 seconds after each line. While it is writing, Filebeat is restarted every 20 to 25 seconds, until all 50,000 lines have been written (a minimal sketch of this harness is shown below).
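A minimal sketch of such a harness, for reference. The exact line format, the placeholder path, and restarting Filebeat via systemctl are illustrative assumptions, not taken from the original script:

import random
import subprocess
import time

LOG_PATH = "/xxx/xxx.log"   # placeholder path, matching the input config above
TOTAL = 50_000
RESTART_EVERY = (20, 25)    # restart Filebeat every 20-25 seconds

def main():
    next_restart = time.monotonic() + random.uniform(*RESTART_EVERY)
    with open(LOG_PATH, "a", buffering=1) as f:   # line-buffered append
        for i in range(1, TOTAL + 1):
            f.write(f"test log line {i}\n")       # numbered so gaps are detectable
            time.sleep(random.uniform(1, 2))      # 1-2 s between lines
            if time.monotonic() >= next_restart:
                # assumes Filebeat runs as a systemd service
                subprocess.run(["systemctl", "restart", "filebeat"], check=True)
                next_restart = time.monotonic() + random.uniform(*RESTART_EVERY)

if __name__ == "__main__":
    main()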

In most cases, Filebeat collects all 50,000 logs. However, very rarely (roughly once in every 50 runs) Filebeat misses a single log, i.e. it only collects 49,999.
When a log is lost, Filebeat does not log any error. However, I found that whenever a log is lost, the periodic metrics always report events.failed=1:

"pipeline":{"clients":0,"events":{"active":0,"failed ":1,"filtered":6,"published":2986,"retry":20000,"total":2993},"queue":{"acked":2986}}},"registrar":{"states ":{"current":3,"update":2992},"writes":{"fail":0,"success":39,"total":39}}

Does anyone know why this happens? Doesn't Filebeat claim at-least-once delivery, i.e. that it doesn't lose logs?
