But this only happens when the file is updated; it does not happen when the file is initially loaded.
Does anyone have a suggestion as to what the issue could be?
With pre-existing log data already in the files, the issue won't occur because Filebeat can read the multiline data as fast as it likes. It never has to wait for the start of the next multiline block, which is what normally signals that the previous multiline block is complete and can be sent as an event.
This is probably caused by the process writing the file not flushing frequently enough. To account for this, you can increase the multiline timeout value in Filebeat (see multiline.timeout).
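As a sketch, a longer timeout might look like the following (the input path is hypothetical, and the pattern is taken from the config shared later in this thread):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log          # hypothetical path, adjust to your setup
    multiline.pattern: '^(.*JOB)|Cooldown'
    multiline.negate: true
    multiline.match: after
    # raise the timeout well above the writer's flush interval,
    # so a slowly written block is not split prematurely
    multiline.timeout: 10m
```

The trade-off is latency: a block that is never followed by a new start line will only be emitted once the timeout expires.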
Yes, this process is a real "slow mover", so we're getting events every 5 minutes. I tried increasing the timeout to 7 minutes, but I'm still facing the same problem.
Can you try a really long value just to see if it fixes the issue? Worst case -- the most recent event will be delayed a bit.
multiline.pattern: '^(.*JOB)|Cooldown'
multiline.negate: true
multiline.match: after
multiline.timeout: 1h
Do your events have a predictable string that can be used to flush? If so you could try the multiline.flush_pattern. (original pull request for the feature)
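For illustration, if the events did end with a known terminator line, flushing on it would look roughly like this (the terminator string here is purely hypothetical):

```yaml
    multiline.pattern: '^(.*JOB)|Cooldown'
    multiline.negate: true
    multiline.match: after
    # flush the accumulated block as soon as a line matches this pattern,
    # instead of waiting for the next block start or the timeout
    multiline.flush_pattern: 'End of report'   # hypothetical terminator
```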
Unfortunately, the extended timeout did not work either. There's no predictable string for the flush_pattern: the message is quite dynamic, except for the start pattern.
Do you have something in mind as to why it only affects the first character (and only when a new event is added)?
Sorry for the late reply. The timeout error has now disappeared from the log. I can't find any further error messages, but the issue with the missing character still occurs.