Today I was trying to get Filebeat up and running. I have Logstash, Elasticsearch and Kibana all set up, but I'm new to Filebeat.
Previously I had Logstash running on an Ubuntu VM, SMB-mounting Windows shares and correctly tailing log files from multiple servers. These log files roll over whenever they hit 50 MB, and Logstash was handling this fine.
I've now got Filebeat running on a Windows 2012 R2 server, picking up files from the same Windows shares. All is good until the files roll over.
The way the files are rolled over: somelog.log is renamed to somelog.1.log, and a new somelog.log is created.
Should Filebeat be able to handle this? I read through the docs and played with force_close_files and tail_files, but no luck.
The Windows share thing is because I'm trialling ELK for our log analytics and can't get approval to run Filebeat on the actual servers just yet.
Sorry, no config at the moment; I'm at home with a beer. But has anyone seen this issue and got the magic bullet?
filebeat:
  prospectors:
    -
      paths:
        - \\winserver\logs\single.log
      input_type: log
      fields:
        sourceType: someservicename
      tail_files: false
      force_close_files: true
output:
  elasticsearch:
    enabled: false
    hosts: ["elasticserver:9200"] # if this is deleted, the config is reported as invalid
  logstash:
    enabled: true
    hosts: ["1.1.1.1:5044"]
    index: appname
The shares are backed by Windows.
I had read through to that network volumes part, and as I mentioned, I can't run Filebeat in prod just yet. Unfortunately our servers are still pets.
If "For example, changed file identifiers may result in Filebeat reading a log file from scratch again." were happening, I'd be happy. My understanding of that is: if a file changes name, it will be re-read from the beginning. As per my config, I'm monitoring only a single file. When it hits 51 MB it gets renamed and a new file is created with the same name. This is where Filebeat stops working.
Results from -e -d '*' in the next post because of size constraints.
In paths you might consider using \\winserver\logs\single*.log, so that after file rotation the rotated content will still be sent until end of file (in case Filebeat hasn't processed the complete file yet).
After file rotation the new single.log should have a new file ID, and the new file should be picked up.
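A sketch of that suggested change against the prospector from the config above (paths and fields as in the original post; whether the glob behaves well over the share is exactly what's being tested here):

```yaml
filebeat:
  prospectors:
    -
      paths:
        # The glob matches the live single.log as well as rotated
        # copies like single.1.log, so content written just before
        # rotation can still be read through to end of file.
        - \\winserver\logs\single*.log
      input_type: log
      fields:
        sourceType: someservicename
      force_close_files: true
```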
Is your application using a logging library, e.g. log4net? Is the file really renamed, or copied?
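The rename-vs-copy distinction matters because Filebeat keys its state on file identity. A quick way to see the difference on a POSIX-like system, using the inode number as a stand-in for the Windows volume/file ID pair (file names here are just illustrative):

```python
import os
import shutil
import tempfile

# Demonstrate that rename preserves a file's identity while copy
# creates a new one. On Windows the analogous identity is the
# volume/file ID pair; here we use the POSIX inode (st_ino).
d = tempfile.mkdtemp()
log = os.path.join(d, "somelog.log")
with open(log, "w") as f:
    f.write("line 1\n")

ino_before = os.stat(log).st_ino

# Rotation by rename: same underlying file, same identity.
rotated = os.path.join(d, "somelog.1.log")
os.rename(log, rotated)
assert os.stat(rotated).st_ino == ino_before

# Rotation by copy: a brand-new file with a new identity.
copied = os.path.join(d, "somelog.2.log")
shutil.copy(rotated, copied)
assert os.stat(copied).st_ino != ino_before
```

If the library copies the old content out and truncates the original in place, Filebeat sees the same file ID throughout, which changes what behaviour you should expect.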
I don't see anything of interest in the logs. Maybe run with "-v -e -d '*'"? I'm basically missing a log line saying 'Force close file' (which is logged at INFO level). Can you grep the log file for this message? The force_close_files option is active if the file does not exist anymore or the volume/file ID changed for the same file name (volume and file IDs uniquely identify a file).
As I don't have much experience with Windows network shares at this low level, I'd like to see what happens with the volume and file IDs on file rotation.
If possible, let's run a small experiment:

1. Check the log directory is empty.
2. Start Filebeat so it captures the full log.
3. Start the application to generate logs.
4. Wait until the log file is rotated and Filebeat stops processing.
5. Stop Filebeat.
6. Copy the registry file (default is .filebeat, I think) to registry1.
7. Delete the registry file.
8. Start Filebeat, wait for another log rotation, then stop Filebeat.
9. Copy the registry file to registry2.
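To compare the two snapshots, something like this small sketch could diff the stored identities. The field names (offset, FileStateOS, idxhi/idxlo/vol) are illustrative; check the actual registry JSON for the real keys in your Filebeat version:

```python
import json

def load_registry(path):
    """Load a copied registry snapshot (registry1 / registry2)."""
    with open(path) as f:
        return json.load(f)

def changed_identities(reg1, reg2):
    """Yield paths whose stored file identity differs between snapshots."""
    for path, state1 in reg1.items():
        state2 = reg2.get(path)
        if state2 and state1.get("FileStateOS") != state2.get("FileStateOS"):
            yield path, state1["FileStateOS"], state2["FileStateOS"]

# Example with made-up data standing in for the two snapshot files:
reg1 = {r"\\winserver\logs\single.log":
        {"offset": 100, "FileStateOS": {"idxhi": 1, "idxlo": 2, "vol": 9}}}
reg2 = {r"\\winserver\logs\single.log":
        {"offset": 0, "FileStateOS": {"idxhi": 1, "idxlo": 3, "vol": 9}}}
print(list(changed_identities(reg1, reg2)))
```

If the two snapshots show identical identities for the rotated file, the share is not exposing a new file ID, which would explain Filebeat not detecting the rotation.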
What's the content of registry1 and registry2 after processing? The registry file stores file information like offset, path and volume/file ID in JSON format. The volume/file IDs must differ for Filebeat to be able to detect that a file was rotated.
If the volume/file IDs differ between the registry files but the 'Force close file' log message is still missing, that points towards a bug in Filebeat.
Sorry for the inconvenience, but properly supporting Windows-backed volumes is quite tricky in itself, and supporting networked Windows shares is no easier. Any help is much appreciated.
@jamesleech: @ruflin mentioned that Windows network shares are a whole different beast. He tried once and ran into multiple different issues at once, like the underlying file ID changing at random (for the same file, due to SMB/CIFS having to make up some metadata), or metadata just not being updated (maybe due to metadata caching).
We'd strongly vote for not using Windows shares, but collecting the log files from disk on the server itself.
Did you have any luck with that?
I also have Filebeat on a central Windows server collecting logs from remote application servers using shares.
Accessing the files using the FQDN:
\\app-server\logs\mylog\file.log.*
while the files can be:
file.log
file.log.1
file.log.2
…
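A prospector sketch for that layout, with names taken from the post above. Note that file.log.* only matches the rotated copies; file.log* would also pick up the live file. Whether SMB exposes stable file IDs for this setup is the open question in this thread:

```yaml
filebeat:
  prospectors:
    -
      paths:
        # file.log* matches file.log plus file.log.1, file.log.2, ...
        # (file.log.* alone would skip the live file.log)
        - \\app-server\logs\mylog\file.log*
      input_type: log
```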