Filebeat: file is falling under ignore_older

Hi, I am getting tons of these errors in Filebeat:

2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG1.lOG
2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\Log2.log
2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG3.loG
2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG4.lOG
2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG5.Log
2016-09-22T06:56:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\log6.log
2016-09-22T06:56:22+02:00 ERR Failed to publish events caused by: EOF
2016-09-22T06:56:22+02:00 INFO Error publishing events (retrying): EOF
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG1.lOG
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\Log2.log
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG3.loG
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG4.lOG
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG5.Log
2016-09-22T06:56:32+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\log6.log
2016-09-22T06:56:42+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG1.lOG
2016-09-22T06:56:42+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\Log2.log
2016-09-22T06:56:42+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG3.loG
2016-09-22T06:56:42+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: d:Logger\LoggerDw11\LOG4.lOG

As you can see, there are a lot of EOF errors in the middle too, but that's another thread.

For each of the files, the Filebeat configuration is the following:

  fields_under_root: true
  ignore_older: 10m
  close_inactive: 2m
  clean_inactive: 15m

This especially happens on logs that are rarely updated, or during the night when there is little load. It also happens when the file should have been ignored from the beginning (i.e. last update 1h ago).

Any ideas? Today I also added the Filebeat log itself to my prospectors, and the error happened on that file too:

File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: c:\Program Files\Filebeat\logs\filebeat

The settings are the ones I wrote above.
Logs are correctly imported, but the error is still there.

Any idea?

Thanks!

As long as events are not acked by the receiving end, Filebeat will not release the file handle, as this could lead to data loss. If you want Filebeat to force-close your files after a certain time, use close_timeout: https://www.elastic.co/guide/en/beats/filebeat/master/configuration-filebeat-options.html#close-timeout If you use this feature, it is important that you understand the side effects.
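For example, a prospector using it could look roughly like the sketch below. I'm reusing the timings from your config above; the path and the 5m value for close_timeout are just placeholders to adjust to your own rotation behaviour.

  - input_type: log
    paths:
      - d:\Logger\LoggerDw11\*.log   # example path based on your logs above
    fields_under_root: true
    ignore_older: 10m
    close_inactive: 2m
    clean_inactive: 15m
    close_timeout: 5m                # placeholder value: force-close the harvester after this time (mind the side effects described in the docs linked above)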

So you are saying Logstash is not sending acks? I tried debugging the log pipeline and I can see the logs in Kibana.
That's exactly why I added the Filebeat log itself: I see that error in Kibana, so logs are going straight through to the destination, no more than 10s after they are created.
Furthermore, it never happens at peak times; I usually see it during the night, when the pipeline is under light load.

How are those ack messages sent from Logstash back to Filebeat? Do they use a special port or something else I can check?

I assumed this happens during peak times or when there are issues with LS. I didn't check the thread FileBeat EOF Error in detail yet, but my initial reaction is that this sounds very related.

I would strongly discourage connecting Filebeat to its own log file, as you could run into an endless loop in case of errors. I would recommend using a second Filebeat instance for this.
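As a rough sketch, such a second instance would only need a prospector pointing at the main instance's log directory plus its own output; the paths and host below are placeholders, not values from your setup:

  # filebeat-monitor.yml (hypothetical config for a second Filebeat instance shipping the main instance's logs)
  filebeat.prospectors:
    - input_type: log
      paths:
        - c:\Program Files\Filebeat\logs\filebeat*
  output.logstash:
    hosts: ["your-logstash-host:5044"]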

I suggest we try to solve that other issue first (FileBeat EOF Error) and then check whether the error above disappears.

From what @steffens says, the EOF is not an error I should worry about. Furthermore, ignore_older and EOF are not strictly related: ignore_older also happens during the day (e.g. on the Filebeat log itself, where I get the error all day long), whereas EOF happens only at night and in off-load periods.

  • Which Filebeat version are you using?
  • Any chance to share some log files with INFO log level or even DEBUG log level? Perhaps there is some more info on why the file was not closed.

Depending on which version you use, you could hit the issue here: https://github.com/elastic/beats/pull/2539

Version 5 beta (downloaded last week).

I do have logs at INFO level but not DEBUG: Filebeat sends thousands of lines per second, and running it in debug mode is not practical because the logs fill up with the data being sent and contain hardly any actual Filebeat messages.

2016-10-11T14:17:00+02:00 INFO Error publishing events (retrying): EOF
2016-10-11T14:17:06+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=9 filebeat.harvester.running=-1 registrar.states.update=14336 filebeat.harvester.closed=1 filebeat.harvester.open_files=-1 libbeat.logstash.publish.write_bytes=570245 libbeat.publisher.published_events=6099 publish.events=14336 libbeat.logstash.publish.read_errors=2 registrar.writes=7 libbeat.logstash.publish.read_bytes=60 libbeat.logstash.published_and_acked_events=7332 libbeat.logstash.published_but_not_acked_events=1858
2016-10-11T14:17:36+02:00 INFO Non-zero metrics in the last 30s: registrar.states.update=30720 registrar.writes=15 libbeat.logstash.published_and_acked_events=18455 libbeat.publisher.published_events=19526 publish.events=30720 libbeat.logstash.publish.read_bytes=192 libbeat.logstash.call_count.PublishEvents=15 libbeat.logstash.publish.write_bytes=911769
2016-10-11T14:18:06+02:00 INFO Non-zero metrics in the last 30s: publish.events=28672 registrar.states.cleanup=1 libbeat.logstash.publish.read_bytes=198 libbeat.publisher.published_events=17497 libbeat.logstash.published_and_acked_events=18687 registrar.states.update=28672 libbeat.logstash.publish.write_bytes=968170 libbeat.logstash.call_count.PublishEvents=14 registar.states.current=-1 registrar.writes=14
2016-10-11T14:18:15+02:00 INFO File is inactive: d:..... Closing because close_inactive of 2m0s reached.
2016-10-11T14:18:18+02:00 ERR Failed to publish events caused by: EOF
2016-10-11T14:18:18+02:00 INFO Error publishing events (retrying): EOF
2016-10-11T14:18:31+02:00 INFO File is inactive: d:.... Closing because close_inactive of 2m0s reached.
2016-10-11T14:18:32+02:00 ERR Failed to publish events caused by: EOF
2016-10-11T14:18:32+02:00 INFO Error publishing events (retrying): EOF

Today we found duplicated log lines, so we decided to remove ignore_older, close_inactive and clean_inactive from all prospectors and give it a try. How much can Filebeat's memory grow this way?

Is it a dangerous config?

In the logs you posted above, there are quite a few EOF errors but none of the ignore_older messages. Could you post an excerpt of the log with these messages in it?

What happens with the above config is that Filebeat keeps the state for all existing files. As clean_removed and close_removed are enabled by default, and there is also a default of 5m for close_inactive, you should not see any issues, as long as you don't keep thousands of files that Filebeat has to check on every scan.
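Spelled out as config, those defaults correspond roughly to the following (values as mentioned above; you don't need to set them explicitly):

  close_inactive: 5m
  close_removed: true
  clean_removed: true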

Side note: duplicated lines are often not caused by the above config options but by network issues. If a batch of events was actually received by LS but the ack never reached Filebeat, the lines will be sent again.

You're right, I pasted the wrong logs. Anyway, there was nothing useful in there; I went through it again and again, but nothing was different from what I posted in the first post.
Right now I have removed all that config, but Filebeat's memory usage is growing a lot, so I think I have to put it back. Tonight I will try to run one instance in debug mode and see if it logs something more.

However, with this configuration I am still getting EOF errors but no duplicated lines!

My situation is that I have about 20 prospectors for 20 single files; each of them is rotated and renamed after it reaches around 10 MB. The renamed file can be closed and ignored; the newly created file (with the name specified in the prospector) should be picked up.
E.g. the prospector starts on myapp.log; after it reaches 10 MB it is renamed to myapp20161013111000.log and a new myapp.log starts.
I want to keep only the new file active, i.e. the main log, not the older renamed files (rough sketch below).
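Something like this is what I mean (the path is simplified, and I am not sure close_renamed is the right knob here):

  - input_type: log
    paths:
      - d:\Logger\myapp.log     # only the active file; rotated copies like myapp20161013111000.log are not matched
    close_renamed: true         # assumption: close the harvester as soon as the file is renamed on rotation
    clean_removed: true         # drop the state once a rotated file is deleted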

@ruflin I found another thing: now I have 24h for close and 30h for ignore_older and no error is showing up, but digging in the log files I found that sometimes the last-edit time is stuck at some time, let's say 10 a.m., while the file is named for 11.30 a.m.
The last line written in the log is at 10 a.m. (a few seconds before), but the file has definitely been rotated at 11.30, even though the last-edit time was not updated.

Is it possible to rename a file without changing the edit time? Can that be the cause? Does a rename force the harvester to move on?

Second, I was considering the fact that some log files are "sleeping" for hours: no lines written at all. Obviously the edit time does not move forward until they start writing again hours later. This holds for some of my loggers, but on the other hand the Filebeat log file changes every minute, so I am really out of ideas as to why this is happening.

Now I have this new error, after a restart of Filebeat:

State for d:Logger\Log1.log should have been dropped, but couldn't as state is not finished.

Edit: this one was solved via a clean restart; I don't get why it was not able to reload its own state, but I still want to understand the ignore_older issue.

About the modification time: there are two different timestamps here. One is the modtime of the file, and the other is the modtime of the state in Filebeat. The modtime of the file does not get updated on rotation, but the modtime of the state in Filebeat does, as Filebeat keeps track of when the state was updated. The reason for not relying on the modtime of the file is that the "finished reading file" time is >= the modtime, and the modtime of the file is not always reliable (Windows). So if your file gets constantly rotated, the timestamp of the state gets reset. But if the file didn't change after rotation, no new harvester should be opened, so the state should be removed.

About your post above, "but on the other side the filebeat log file changes every minute": what do you mean by that? How is the Filebeat log related to your harvesting of log files?

What you said is correct and fine for my case! :slight_smile:
But I still have that error.

Filebeat has a prospector on its own log file.

This is the Filebeat log:

2016-10-24T11:28:58+02:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=2 publish.events=1192 filebeat.harvester.running=2 libbeat.logstash.publish.read_bytes=708 libbeat.logstash.call_count.PublishEvents=6 libbeat.logstash.published_and_acked_events=1150 filebeat.harvester.started=3 registrar.states.update=1192 registrar.writes=6 filebeat.harvester.closed=1 libbeat.logstash.publish.write_bytes=97102 libbeat.publisher.published_events=1150
2016-10-24T11:29:02+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: c:\Program Files\Filebeat\logs\filebeat
2016-10-24T11:29:12+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: c:\Program Files\Filebeat\logs\filebeat
2016-10-24T11:29:22+02:00 ERR File is falling under ignore_older before harvesting is finished. Adjust your close_* settings: c:\Program Files\Filebeat\logs\filebeat

Logs are correctly read and parsed: I can see the exact last line in Kibana,
but Filebeat says it cannot close the file.

It seems to me like its internal state is broken.

The configuration is the following:

  - input_type: log
    paths:
      - c:\Program Files\Filebeat\logs\filebeat
    fields:
      catena: 18
    fields_under_root: true
    ignore_older: 24h
    clean_inactive: 30h
    document_type: log_filebeat

Hm, I see. I would strongly recommend not doing this, as it can lead to an infinite loop on errors. If you want to ship Filebeat's own logs, I would recommend using a second Filebeat instance.

Still, I'm a little bit confused that the file is falling under ignore_older even though it seems to be constantly updated. Could you check the modtime of the file? As Filebeat does not close the log file it writes to, on Windows it could be that the modtime is not updated.

Yes, I understand your concern, but since I am not running Filebeat in debug mode there is no infinite loop; every X seconds I get an error message at most. Am I wrong? Anyway, for debugging it is very useful right now.

That's exactly the problem I'm struggling with.

You are right: on Windows the last update time is 3 days ago.
But as you said, Filebeat has its own internal modtime, right?

Filebeat has an internal modtime, but that does not have any effect on its own log file, as Filebeat is just a normal log writer there and there is no direct connection to the harvester. So this log file is treated like any other log file.

I need to check in more detail how we write to the log file. Can you check what happens if you stop Filebeat? Will the modtime of the log file be updated? That should be the case, as the file handle will be closed.