Filebeat does not pick up rotated logfile from NLog

Hello,

I'm currently trying to get our structured event logging into Elasticsearch. We have a .NET 5 application running in a Docker container on a CentOS 7 server, and it writes its log to a location on the server filesystem (so outside of the container). I have Filebeat set up to read this file and send it to Logstash.

When I start Filebeat, whatever file is there is processed correctly. However, I discovered that when the file is rotated (this is done by NLog itself, not the Linux logrotate utility), Filebeat doesn't pick up the new file until it contains more events than the old file did. This seems to indicate that Filebeat does not see it as a new file, but rather as the same file, perhaps truncated.

So I checked the NLog source code, and it does appear to perform a file move. I checked the inodes of the files: the old file is indeed moved to an archive directory (same inode, different location), while a new file, with a new inode, appears in place of the old one. Looking at the Filebeat registry, I see some old files (based on their inode), but newer files don't appear in the registry until the old file reaches its inactivity threshold. Then Filebeat closes the old file, immediately notices the new one, and starts processing it.

Any ideas on how to fix this?

ADD: I am using Filebeat 7.13.2

This is the Filebeat config:

filebeat.inputs:
- type: log
  enabled: true

  paths:
   - /var/log/app/structured*.json

  tags: ["app", "structured-log", "asd"]

  close_inactive: 15m
  close_removed: true
  clean_removed: true

  harvester_limit: 1

- type: log
  enabled: true

  paths:
   - /var/log/app/error*.log

  multiline.pattern: '^\[\d{4}'
  multiline.negate: true
  multiline.match: after

  tags: ["app", "error-log", "asd"]

  close_inactive: 15m
  close_removed: true
  clean_removed: true

  harvester_limit: 1

filebeat.config.modules:
  enabled: false

processors:
  - drop_fields:
      fields: ["host"]

queue.mem:
  events: 4096
  flush.min_events: 256
  flush.timeout: 10s

output.logstash:
  enabled: true
  hosts: ["elasticsearch servers"]
  timeout: 5m
  bulk_max_size: 1024
  slow_start: true

logging:
  level: info
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat
    name: filebeat
    keepfiles: 3
    permissions: "0644"
  metrics:
    enabled: false

A co-worker solved the riddle for me. And it's pretty simple once it was pointed out.....
I have a harvester limit of 1, so the file is moved, but it keeps the single harvester occupied. And since no new harvesters are allowed to start, the new file is not yet processed. Only when the inactivity timer is reached is the new file picked up by the now-available, single harvester.

So simple....
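For anyone who lands here later: the straightforward fix is to raise or remove the harvester limit, so the moved file and its replacement can be read at the same time. A minimal sketch of the first input with that change (the paths and other options are just my setup, nothing required):

filebeat.inputs:
- type: log
  enabled: true

  paths:
   - /var/log/app/structured*.json

  close_inactive: 15m
  close_removed: true
  clean_removed: true

  # Allow two harvesters to run concurrently on this input, so the moved
  # (rotated) file and the newly created file can both be read. Setting
  # this to 0 removes the limit entirely (that is the default).
  harvester_limit: 2

Another option might be close_renamed: true, which closes a harvester as soon as its file is renamed or moved and would free the slot immediately; I haven't tried it, and the docs warn it can lose lines written just before rotation, so take it as an untested idea rather than a recommendation.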
