Will filebeat handle properly the underlying log file rotation?

If filebeat monitors the mysqld.log file (the MySQL log file) and that file gets rotated out - renamed to mysqld.log.1, with a new mysqld.log created in its place - will filebeat continue to read the new mysqld.log, assuming filebeat.yml lists mysqld.log as the file to monitor? Will it pick up the new file with the same name and 'switch' monitoring to it, given that the name is the same?

Yes. Filebeat will continue to read from the rotated log even after it is moved, until the file reaches a certain age (based on modified time) or is deleted. It tracks the file by inode number, which does not change on rename. It also periodically looks for new files matching the mysqld.log file name, so that it can start reading from the new log file when it is created.
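
You can check the rename behaviour directly: on a typical filesystem the inode survives a rename, which is exactly the identity Filebeat tracks. A minimal Python sketch (file names are made up for the demo):

```python
import os
import tempfile

# Create a temp "log" file, record its inode, rename it the way
# log rotation does, and confirm the inode is unchanged.
tmpdir = tempfile.mkdtemp()
live = os.path.join(tmpdir, "mysqld.log")
rotated = os.path.join(tmpdir, "mysqld.log.1")

with open(live, "w") as f:
    f.write("some log line\n")

inode_before = os.stat(live).st_ino
os.rename(live, rotated)  # what logrotate / log4j do on rotation
inode_after = os.stat(rotated).st_ino

assert inode_before == inode_after  # same file, new name
```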

Great - so there is no need to specify mysqld.log* - mysqld.log will work too ...

Thank you!

While filebeat uses file metadata like the inode and device id to track files, it still requires the full path to open them. The prospectors can only find files by path, and they check the inode/device id against the known set of inodes.

If a file is simply renamed or closed by the logger but still open by filebeat, filebeat will process it until the end (it will not close the file handle yet). But if filebeat gets restarted in between, it needs to find the renamed files again. For that, the glob pattern must match rotated files too. So it is better to use the pattern mysqld.log*.
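
As a sketch, a 1.x-style prospector configuration covering both the live and rotated files might look like this (the path is illustrative, not from the thread):

```yaml
filebeat:
  prospectors:
    - paths:
        # Matches mysqld.log plus rotated copies such as mysqld.log.1
        - /var/log/mysqld.log*
```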

Sure - rotated files can still be written to after they are renamed ... good point.

Thank you!

Hi, I have this question as well. Sorry, this is probably my English: in the scenario described above, are you saying Filebeat will still be reading mysql.log.1 (previously mysql.log) even after a new file called mysql.log is created? My ignore_older is still the default, which is 24h, and I feel this has caused some records to go missing in my case. If my file rotation happens very frequently, say every 5 minutes, should I set my ignore_older to around 3 minutes?
You said it periodically looks for a new file - do we know how often? So the moment it finds a new mysql.log, will it release the lock on the current mysql.log.1 and start reading the newly created mysql.log? And what role does ignore_older play here in that case?

Thanks for all the work on Filebeat!

are you saying Filebeat will still be reading mysql.log.1

Yes and no. If the file was open by filebeat when it was rotated, filebeat's handle stays valid and filebeat finishes processing the file. If the file was not open by filebeat when it was rotated, whether filebeat processes it depends on the glob pattern.

My ignore_older is still default which is 24h, I feel like this has caused some records to be missing in my case

Which filebeat version are you using? We changed the default ignore_older to infinite (never ignore files by default) and introduced a new close timeout mechanism.

If my file rotation happens very frequently say like every 5 mins, should I set my ignore_older to around 3 mins?

Why? Once a file has been fully processed, its contents will not be sent again (even if the file is rotated).

You said it periodically look for a new file, do we know how often?

Search for scan_frequency. I think the default is 10 seconds.
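
In config terms that would be something like the following sketch (the value shown is the default mentioned above; the path is illustrative):

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/mysqld.log*
      # How often the prospector rescans the paths for new files.
      scan_frequency: 10s
```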

so the moment it finds a new mysql.log created, it will release the lock on the currently mysql.log.1 and start reading the newly created mysql.log?

No. Filebeat does not lock files, but it will continue processing mysql.log.1 until EOF plus the close_older timeout. The newly created mysql.log will be processed in parallel.

Then what role does ignore_older play here if this is the case?

No idea - it depends on your filebeat config. When close_older was introduced, ignore_older was set to infinite (never ignore files). ignore_older is used to filter out files found by the prospector: if a file is older than ignore_older, no worker is started to process it. If a worker was already processing the file, ignore_older used to decide when to close it (this changed when close_older was introduced).
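
For frequent rotation like the 5-minute case above, a hedged sketch of the relevant option (value illustrative, not a recommendation) would be:

```yaml
filebeat:
  prospectors:
    - paths:
        - /var/log/mysqld.log*
      # Close the handle once a file stops receiving new data for this long.
      # With 5-minute rotation, rotated files go quiet quickly, so the
      # handle is released soon after the last line is read.
      close_older: 10m
```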

Thanks for the response, Steffen.

The problem I have now is that in Elasticsearch/Kibana we see what appear to be parsing errors caused by overlapping records. For example:

Record A: Hello World!
Record B: What's up?

A number of records actually look something like:
Hello WWhat's up?

In this case, Record A is interrupted by Record B, and Record A never finishes.

We are using log4j file rotation: it renames the current log file "mylog.log" to "mylog.log.1" after it reaches 10 MB and creates a new file with the same name, "mylog.log", to write to. The weird thing is that the rotated backup file "mylog.log.1" is "locked" and cannot be opened/copied even by a system admin. I was guessing Filebeat was still locking the file until 24 hours later (we use the older version where the default ignore_older is 24h), but you said that's not the case. The glob we set in filebeat.yml is "...\logs\mylog*". Any help would be appreciated.

For the open file handle, it could be that you hit this issue: https://github.com/elastic/beats/pull/2029

About the merged line: it is very strange to see that, and I haven't come across it before. Is the overlap on the first line or the last line? Can you share some real lines?

which filebeat version have you installed?

Filebeat version 1.1.2

  1. In some very old versions there was a buffering problem in filebeat, but I'm not sure which versions are affected.

  2. Please open a new Discuss topic for new problems instead of hijacking other discussions; hijacking makes threads hard to keep track of.

This topic was automatically closed after 21 days. New replies are no longer allowed.