We rotate logs hourly, and Logstash is configured to pick up and filter them. Sometimes, at the end of the hour, Logstash receives an event that looks like a truncated log message (the beginning is there, but the correct ending is missing), so the event can't pass the filter rules. It happens a few times per day, and the timestamp is always right at :59:59, for example:
19:59:59,757, 17:59:59,810, 14:59:59,916, etc.
The full event itself does pass through Logstash properly filtered and without _grokparsefailure, and so do further events from the same log file. The problem is that, as a result, I see the event twice in Elasticsearch: once as the truncated version, unparsed and tagged with _grokparsefailure, and once as the completely parsed full document, as expected. When I delete the Elasticsearch index and have Logstash reindex the related log file, everything is filtered properly and there is no _grokparsefailure at all. So I believe it happens only "in action", because of log file rotation by log4j. I don't think it's a log4j issue though; it's about the way Logstash tracks file changes.
I wonder if I can make Logstash handle file rotation in a smarter way, so it won't emit the truncated version of a log message. I can see how to achieve this by changing my Logstash pipeline config, but I wonder whether it's possible with a global Logstash configuration instead. Any ideas?
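For reference, the per-pipeline workaround I have in mind is roughly the following sketch: drop any event that fails grok, which assumes the truncated duplicates are the only source of _grokparsefailure in this pipeline (the grok pattern here is illustrative, not my real one):

```
filter {
  grok {
    # illustrative pattern; my real pipeline uses its own match
    match => { "message" => "%{TIME:timestamp},%{INT:millis} %{GREEDYDATA:msg}" }
  }
  # discard the truncated copy instead of indexing it with _grokparsefailure
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
```

The downside is that this silently hides any other parse failures too, which is why a rotation-aware setting would be preferable.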
Thank you in advance,