Filebeat 5.0 with multiline splits event data into two events

Hm... \u0000 is the Unicode NULL character; it's literally a byte with value 0 written to the file. Can you check with a hex viewer whether there are 0-bytes in the log file? Normally it's used, like the C-style \0, to indicate the end of a string. But so many consecutive NULL characters...
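A quick way to do that check without a hex viewer is a small script like this (a generic sketch, not part of Filebeat):

```python
# Scan a file for NUL bytes (0x00) and report where they start and how many there are.
import sys

def scan_for_nulls(path):
    """Return (offset_of_first_nul, total_nul_count); (None, 0) if the file is clean."""
    with open(path, "rb") as f:
        data = f.read()
    first = data.find(b"\x00")
    count = data.count(b"\x00")
    return (first if first != -1 else None, count)

if __name__ == "__main__" and len(sys.argv) > 1:
    offset, count = scan_for_nulls(sys.argv[1])
    print("first NUL at offset %s, %d NUL bytes total" % (offset, count))
```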

Any 'empty' buffers being flushed somewhere, or buffer offsets being advanced and flushed before the content is put in place...?

Is the log file using some binary formatter? The $ characters look like an additional separator, and the value 1474290082222 almost looks like a timestamp to me.
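For what it's worth, reading that value as milliseconds since the Unix epoch gives a plausible date, which supports the timestamp guess:

```python
# Interpret 1474290082222 as milliseconds since the Unix epoch.
from datetime import datetime, timezone

ts = datetime.fromtimestamp(1474290082222 / 1000, tz=timezone.utc)
print(ts)  # 2016-09-19 13:01:22.222000+00:00
```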

It is in Elasticsearch; it is what the ingest node receives from Filebeat.
So, can it be that the event is split after a timeout, even for a single-line event?

The NULs reminded me of this: https://stackoverflow.com/questions/6814404/java-inputstream-read-methods-returning-ascii-nul-characters-for-file-in-a-nfs Any chance that is happening in your case?

I am trying to use timeout: -1 in the Filebeat config file, and I am getting the following message:

Exiting: Error in initing prospector: negative value accessing 'filebeat.prospectors.0.multiline.timeout' (source:'/opt/sw/filebeat/ymls/qa-app01/filebeat_applogs.yml')

Ori

Sorry, I think timeout: 0 should disable timeouts.
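In Filebeat 5.x, the setting lives under the prospector's multiline section. A minimal sketch (the path and pattern are placeholders, not from this thread):

```yaml
filebeat.prospectors:
  - paths:
      - /var/log/app/app.log
    multiline:
      pattern: '^\['   # placeholder: lines starting with "[" begin a new event
      negate: true
      match: after
      timeout: 0       # 0 disables the timeout; negative values fail at startup
```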

But the timeout will not protect you from weird buffering effects, as the file pointer has already advanced once this happens.

I set timeout: 0 and it still splits events...

If I understand you correctly, split events means events starting with all the \u0000? Are you using network shares? NFS (via UDP or TCP) or SMB?

Events being split would normally show up as an event starting right in the middle of the message, without all those zeros.

The \u0000 is not due to Filebeat splitting events; it is due to the OS presenting all those zeros to Filebeat at read time, which might be caused by data being processed out of order.

Maybe one could implement a workaround in Filebeat: check for the NULL character, back off a little, and retry reading in the hope that the missing content has arrived in the meantime.
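Such a workaround might look roughly like this (a hypothetical sketch in Python, not actual Filebeat code; the function name and parameters are made up):

```python
# Sketch of the suggested workaround: on seeing NUL bytes, deliver only the
# clean prefix, leave the file pointer at the first NUL, and retry that
# region later in the hope the real data has been flushed by then.
import time

def read_avoiding_nulls(f, size, retries=3, delay=0.1):
    for attempt in range(retries):
        pos = f.tell()
        chunk = f.read(size)
        i = chunk.find(b"\x00")
        if i == -1:
            return chunk          # clean read, deliver everything
        f.seek(pos + i)           # rewind to the first NUL
        if i > 0:
            return chunk[:i]      # deliver the clean prefix now
        time.sleep(delay)         # chunk starts with NULs: wait and retry
    return b""                    # still NULs; the caller retries next time
```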

Hi Steffens,

The split event does not necessarily start with \u0000.
It splits in the middle of the message text.

I am using CIFS on Linux to access log files on Windows app servers.

Ori

Did you disable the timeout? If so, do you still see incomplete messages without NULL characters?

Yes, I set timeout: 0.

There are still split events. Why is that?

Ori

Hi Ori

If I remember correctly, the whole issue started because you were running Filebeat on Windows and it used too many resources. Then you started using network drives and reading the files from a Windows machine, which led to many other interesting issues. As we are not in control of what exactly happens on shared network drives, I would suggest we get back to the Windows resource issue instead of trying to debug issues related to network drives. If there is unreasonably high resource usage on Windows, I'm sure we can find a way to fix it. If strange things happen on network drives that don't happen on non-network drives, I'm not so confident.

WDYT?

This topic was automatically closed after 21 days. New replies are no longer allowed.

Hi Ruflin,

I tried playing with the scan_frequency parameter; it did not help.
The problem is that we have 22 different log types on the same server.

Currently I'm working with one dedicated Windows machine as a Filebeat node, which connects to the other Windows machines using UNC paths such as: \\app-server\logs\type1\type.log.*
This seems to be working fine.
The one exception is multiline events, which get split even though timeout: 0 is set.

Ori

Any chance that we could get some memory profiling from your filebeat instances on windows to further investigate the memory issue?

Sorry for the delay, a lot of holidays.....

We decided to use one central machine to serve as the Filebeat server, which accesses the remote app servers using UNC paths.
There are a few problems with some logs, but I think that is related to the way they are managed and generated.
The one exception is the particular type which still gets split several times, even though I set timeout: 0 in the config file.

I think I found out when it happens...
It seems to be the last event in the log file.
The log file is renamed and a new log file is created:

file.log
file.log.1
file.log.2

The split event holds a marker of the end of the file (a few rows which get printed at the end).
So it's probably not a timeout matter...

Why can renaming a file cause this?
Filebeat should be able to handle it.

Ori

In general, renaming a file does not change the file handle, so the harvester itself doesn't even get notified that a renaming happened. This is the case for local disks. TBH I'm not sure the handling is exactly the same for a mounted disk: as the rename command happens "remotely", perhaps the file handle is closed (honestly, I would have to dig into the details of these file systems to fully understand whether anything special happens). Do you see anything special in the Filebeat log files, like the handle being closed or an error?
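On a local POSIX filesystem this behavior is easy to demonstrate (a toy sketch; whether CIFS behaves the same is exactly the open question here):

```python
# Demonstrates that renaming a file does not invalidate an already-open
# handle on a local POSIX filesystem; a harvester-style reader keeps working.
import os, tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "file.log")
with open(path, "w") as w:
    w.write("line 1\n")

reader = open(path)               # harvester-style open handle
os.rename(path, path + ".1")      # rotate the file

with open(path + ".1", "a") as w: # the writer keeps appending to the renamed file
    w.write("line 2\n")

data = reader.read()              # the old handle still sees everything
print(data)                       # prints "line 1" and "line 2"
reader.close()
```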

Hi Ruflin,

Thanks for the reply.

  1. What should the log level be?
    Is the default enough?
  2. I have checked in the Resource Monitor that comes with Windows,
    and also increased the close_inactive value.
    In Resource Monitor, I can see that Filebeat holds handles to all files, including renamed files.
    Is there a way to check whether a file is considered new after renaming it?

I am accessing the files via UNC paths:
\\Server-Name\Share-Name\LogType1\app.log*
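For reference, a minimal sketch of a Filebeat 5.x config with these settings (the close_inactive value below is just an example, not the one actually used):

```yaml
filebeat.prospectors:
  - paths:
      - \\Server-Name\Share-Name\LogType1\app.log*
    close_inactive: 15m   # example value: keep handles open longer on slow shares
logging.level: debug      # most verbose level, useful while debugging harvesters
```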

Filebeat runs on Windows Server, which accesses log files on remote Windows Servers.

Thanks,

Ori

Hi Ruflin,

I ran some tests myself.
I took one file with a multiline event and, in a loop, appended its content to a larger log file.
After it reached a certain size, I renamed the larger log file.

I created 4 such log files:
file.log -> file1.log.4
file.log -> file1.log.3
file.log -> file1.log.2
file.log -> file1.log.1

I tested the following scenarios:

  1. Let the harvester close the file, write the one event one more time, and then rename the file.
    All events were sent to Elasticsearch successfully.
    I could see messages in the Filebeat log about detecting the rename of a file,
    but they were not there for all files.
  2. Let the harvester close the file, rename it, and then write the one event one more time into the renamed file.
    All events were sent to Elasticsearch successfully.
    I could see messages in the Filebeat log about detecting the rename of a file,
    but they were not there for all files.
  3. Same as the first, except that I gave the harvester some more time to close the file and added a random sleep between writes into the large log file.
    All events were sent to Elasticsearch successfully.
    I could see messages in the Filebeat log about detecting the rename of a file;
    this time I could find the message for all files.
    All data came from file.log, except for the last 4 events per loop, which are written to the file just before it is renamed and which the harvester then picks up.
  4. Same as the second.
    All events except the last 4 were sent to Elasticsearch successfully.
    I could see messages in the Filebeat log about detecting the rename of a file;
    this time I could find the message for all files.
    The offset in the log file indicated that it did not recognize the last event.
    Only after a while, when I restarted Filebeat, did it send the last 4 messages.
    I also tried writing some more events, but that did not help; only restarting Filebeat sent them all.

So it seems that Filebeat does recognize files being renamed, also when they are accessed via UNC paths.
What worries me is the last test; I do not know why it did not recognize that the files were updated.

I am using version 5.0.0-alpha5; I will also try the 5.0.0 GA release.

Let me know if you are interested in the scripts and logs from my tests, and where I can place them.

Unfortunately, I wanted to find what causes Filebeat not to send the last event (to split it), but I could not reproduce it with my tests.
So we still encounter the problem, without any solution...

Ori