Filebeat loses 10 seconds of logs on rotation

Hi!

My Filebeat is losing approximately 10 seconds of logs. The problem is clearly in the file rotation, but I cannot find the right option to keep those logs from being lost.
The log rotation is done by Log4j:

655947 -rw-r--r-- 1 user group 21M Aug 20 09:34 /opt/weblogic/logs/app/app-150.log
655948 -rw-r--r-- 1 user group 20M Aug 20 18:19 /opt/weblogic/logs/app/app.log
----- rotated -----
655948 -rw-r--r-- 1 user group 21M Aug 20 09:34 /opt/weblogic/logs/app/app-150.log
655949 -rw-r--r-- 1 user group 20M Aug 20 18:19 /opt/weblogic/logs/app/app.log

Listing the inodes, when the log rotates the inode is kept by the rotated file (150), so I understand that Filebeat should keep reading it to the end, but that is not what happens.
Filebeat's log says the file was truncated and it starts reading it again from the beginning, without ever finishing the 150 file. This is my basic configuration; any ideas?

- type: log
  enabled: true
  fields:
    log: "app"
    server: "server4"
  paths:
    - /opt/weblogic/logs/app/app.log

Should I use any of these options: scan_frequency, close_inactive, harvester_buffer_size?
Or maybe I should include both the current log and the rotated one in the paths, like:

  paths:
    - /opt/weblogic/logs/app/app-150.log
    - /opt/weblogic/logs/app/app.log

Thanks for the help!

I would use a path like /opt/weblogic/logs/app/app*.log; otherwise, once the file is renamed it no longer matches the pattern and Filebeat may stop caring about it even though it wasn't done reading it.
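
Something along these lines, as a rough, untested sketch; scan_frequency and close_inactive are just the defaults (10s and 5m) written out so you have knobs to tune, and the fields are copied from your config:

- type: log
  enabled: true
  paths:
    # the glob keeps matching the file after it is renamed to app-NNN.log
    - /opt/weblogic/logs/app/app*.log
  # how often Filebeat scans for new or renamed files (10s is the default)
  scan_frequency: 10s
  # how long a harvester stays open on an idle file before closing (5m is the default)
  close_inactive: 5m
  fields:
    log: "app"
    server: "server4"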

Thinking the same, I tested using these paths:

/opt/weblogic/logs/app/app.log
/opt/weblogic/logs/app/app-150.log (rotated)

But I got duplicates every time it rotated. Is there a difference if I use the wildcard? (There are 150 files per server.)

I don't know if it's treated differently. It may be, since it's a single glob as opposed to two separate paths, but that's just a guess. I'd try it just to see.

- type: log
  enabled: true
  close_inactive: 5m
  fields:
    log: "app"
    server: "server4"
  paths:
    - /opt/weblogic/logs/app/app*.log

I tried this configuration, but I keep losing the end of the rotated files, and I don't see it closing files that have been inactive for more than 5 minutes. I also have the following error in the log:

2021-08-24T13:39:12.465Z ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: 429 Too Many Requests: 429 Too Many Requests /_bulk
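
(I guess that 429 is Elasticsearch pushing back on the bulk requests. If it matters, I may try slowing the output down a bit with something roughly like this under output.elasticsearch; the host is a placeholder and the values are just a guess, and I haven't confirmed it is related to the lost lines:)

output.elasticsearch:
  # placeholder host, ours is different
  hosts: ["localhost:9200"]
  # send smaller bulk requests with a single worker to ease the 429s (values are a guess)
  bulk_max_size: 20
  worker: 1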

I changed my settings to the following, which I think is the right thing to do, but I keep losing those last seconds at the end of each rotated log:

- type: log
  enabled: true
  harvester_limit: 10
  ignore_older: 72h
  close_inactive: 5m
  clean_inactive: 74h
  fields:
    log: "app"
    server: "server4"
  paths:
    - /opt/weblogic/logs/app/app*.log

(PS: I am no longer getting truncated messages since I removed close_renamed.)

PS2: The log rotates quite frequently ... but Filebeat never picks up the last lines of the rotated log, even though the harvester is alive.

This happens frequently ... is it possible that it is closing the file before it finishes reading it?

2021-08-25T13:42:04.536Z        INFO    log/harvester.go:333    File is inactive: /logs/app-150.log. Closing because close_inactive of 20m0s reached.
2021-08-25T13:42:04.536Z        INFO    log/harvester.go:333    File is inactive: /logs/app-149.log. Closing because close_inactive of 20m0s reached.
2021-08-25T13:42:04.569Z        INFO    log/harvester.go:333    File is inactive: /logs/app-145.log. Closing because close_inactive of 20m0s reached.
2021-08-25T13:42:04.569Z        INFO    log/harvester.go:333    File is inactive: /logs/app-146.log. Closing because close_inactive of 20m0s reached.
2021-08-25T13:42:11.174Z        INFO    log/harvester.go:302    Harvester started for file: /logs/app-145.log
2021-08-25T13:42:11.175Z        INFO    log/harvester.go:302    Harvester started for file: /logs/app-146.log
2021-08-25T13:42:11.176Z        INFO    log/harvester.go:302    Harvester started for file: /logs/app-149.log
2021-08-25T13:42:11.176Z        INFO    log/harvester.go:302    Harvester started for file: /logs/app-150.log
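
(For reference, the only change from the config above is that I raised close_inactive, which is why the log shows 20m:)

  close_inactive: 20m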

Up?

One thing I saw in another post similar to this: does the final log entry end with a newline character, \n? Without it, Filebeat waits, expecting more to be written.

Apparently my log rotation was making a mess of the inodes ... if someone reaches this post, I suggest you check that: use ls -li to see how the inodes and file names of the logs change across rotations.

Greetings!
