Legacy software I have to use runs on Windows and writes its log in the following way:
it preallocates space by appending a large run of \u0000 (NUL) bytes to the end of the file, and then, whenever it needs to write a log line, it overwrites those NULs in place with real data.
But Filebeat reads the NULs as soon as they appear and ships them to Logstash. As a result, Logstash receives a lot of garbage, and most of the real log lines are never sent at all.
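To make the failure mode concrete, here is a minimal sketch of the write pattern described above (file name, size, and log line are hypothetical): the file is created at its full size filled with NUL bytes, and real data later overwrites the padding in place, so the file size never changes and a tail-style reader that reacted to the initial write has already consumed the NULs.

```python
import os

LOG_PATH = "app.log"      # hypothetical path
PREALLOC_SIZE = 1024      # legacy app reserves this much space up front

# Step 1: preallocate the file with NUL bytes.
with open(LOG_PATH, "wb") as f:
    f.write(b"\x00" * PREALLOC_SIZE)

# Step 2: later, overwrite the NULs in place with a real log line.
line = b"2019-09-01 12:00:00 INFO started\n"
with open(LOG_PATH, "r+b") as f:
    f.seek(0)             # the app tracks its own next-write offset
    f.write(line)

# The file size is unchanged; only the content differs.
with open(LOG_PATH, "rb") as f:
    data = f.read()
print(len(data))
print(data.count(b"\x00"))
os.remove(LOG_PATH)
```

A byte-offset-based tailer that read the file after step 1 has already advanced past the region that step 2 rewrites, which is why the real lines are missed.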
Is there any solution to this issue?
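One partial workaround sometimes suggested (I have not verified it against 7.3.1) is to filter out NUL-only lines with `exclude_lines`; note that this only drops the garbage events and does not make Filebeat re-read the overwritten bytes, since its registry offset has already advanced past them. The path below is hypothetical:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - 'C:\legacy\logs\*.log'   # hypothetical path
    # Drop events that consist only of NUL padding. Assumes the padding
    # is delivered to Filebeat as whole "lines", which may not hold.
    exclude_lines: ['^\x00*$']
```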
I've found some topics about \u0000 and buffering like this:
The Filebeat version is 7.3.1.
Do you believe Filebeat is designed to handle this kind of preallocation? If so, do you know how it does it?
We use Proxmox for virtualization and have found the following: Filebeat works correctly when the paravirtualized storage backend is the old virtio-blk, but the described issue occurs when the backend is the newer virtio-scsi. Could you suggest how the storage backend might affect Filebeat?