So we have a service with an hourly log rotation because of the volume of log entries. Filebeat mostly recognizes that a new file has been created and correctly picks up the new entries written to it. There are times, though, when Filebeat will not parse a log file because it thinks there are no new changes to the file. Then, when the next hour hits and a new log file is created, it sees those new log entries. Any clues on what the cause may be, or should I just increase scan_frequency? We're currently using 1.3.1.
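For context, scan_frequency is a per-prospector option in the 1.x config and only controls how often Filebeat checks the configured paths for new or changed files (the default is 10s). A minimal sketch of where it would go, using a placeholder path:

filebeat:
  prospectors:
    -
      input_type: log
      paths:
        - /var/log/myservice/*.log
      # check the paths for new or changed files every 5s instead of the 10s default
      scan_frequency: 5s

If the problem is stale registry state after rotation rather than slow scanning, lowering scan_frequency alone may not help.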
Try upgrading to 5.0?
What does your config look like?
Going to hold off on upgrading for now unless it's required to resolve the issue. Here's my config:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/xyz/business/logs/su/tel-*.log
        - /var/xyz/business/logs/suproxy/tel-*.log
      input_type: log
      multiline:
        # The regexp pattern that has to be matched; lines starting with a
        # timestamp like 12:34:56.789 begin a new event
        pattern: ^[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}
        negate: true
        match: after
        max_lines: 10000
        timeout: 60s
  registry_file: /var/lib/filebeat/registry
Can you share your Filebeat log file? It could be that you're hitting an inode reuse issue, which you could solve in 5.0 by using clean_removed.
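For reference, clean_removed is a per-prospector option in 5.x that drops registry entries for files that no longer exist on disk, so state from a rotated-away file is not matched against a new file that reuses the same inode. A minimal 5.x-style sketch, reusing one of the paths from the config above (adjust to your setup):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/xyz/business/logs/su/tel-*.log
  # remove registry state for files that have been deleted from disk
  clean_removed: true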