What strategy should I use for Filebeat when a specific log file is constantly being truncated from the beginning?
I use log files produced by a third-party logging system that I cannot change. The files are truncated after reaching 5 MB in size, but the filename always stays the same. Will Filebeat keep the offset correctly if I set tail_files=true?
If a file gets truncated, Filebeat will detect it and start reading from the beginning again. As in all truncation scenarios, this can lead to data loss: lines written shortly before the truncation may not have been sent yet. To make sure Filebeat reads events as fast as possible and catches as many lines as possible before truncation, set a low backoff value. tail_files should be set to false, as it does not help in this case.
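A minimal sketch of such a configuration, assuming a hypothetical log path; the low `backoff` and `max_backoff` values make Filebeat re-check the file for new lines more aggressively than the defaults, reducing the window in which unsent lines can be lost to truncation:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/thirdparty/app.log   # hypothetical path, adjust to your setup
    # Poll aggressively so lines are shipped before the file is truncated.
    backoff: 100ms        # wait after reaching EOF before checking again (default 1s)
    max_backoff: 1s       # upper bound on the backoff interval (default 10s)
    tail_files: false     # tailing from the end does not help with truncation
```

Note that a lower backoff increases CPU and I/O load, so it is a trade-off between resource usage and how many lines you can catch before each truncation.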