Filebeat and the data.json file: field meaning and service tuning

Dears,

The Filebeat service keeps a file called data.json in /var/lib/filebeat/registry/filebeat. The content of this file looks like:

[
  {"source":"/apps/logs/my_apps/file.log","offset":308017933,"timestamp":"2020-07-23T07:20:38.120400049+02:00","ttl":-1,"type":"log","meta":null,"FileStateOS":{"inode":271256140,"device":2081}},
  {"source":"/apps/logs/my_apps1/file.log","offset":7111033,"timestamp":"2020-07-23T07:20:38.221412049+02:00","ttl":-1,"type":"log","meta":null,"FileStateOS":{"inode":213256140,"device":2081}},
  ...
]

Is there any description of these fields in the Elastic documentation? I would like to know what the offset field means. I suppose it is some mark of where Filebeat finished reading the file, but what exactly does it mean? Bytes, or something else?
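If offset is indeed the byte position read so far (which is my understanding, but worth confirming in the docs), then comparing it with the file's current size tells you how far behind Filebeat is. A minimal sketch, using a made-up registry entry rather than a real one:

```python
import json

# Assumption: the registry "offset" is the byte position Filebeat has read
# up to in each file. The entry below is sample data, not a real registry.
registry = json.loads("""[
  {"source": "/tmp/demo.log", "offset": 120,
   "timestamp": "2020-07-23T07:20:38+02:00",
   "ttl": -1, "type": "log", "meta": null,
   "FileStateOS": {"inode": 271256140, "device": 2081}}
]""")

def lag_bytes(entry, current_size):
    """Bytes written to the file that Filebeat has not yet read."""
    return current_size - entry["offset"]

entry = registry[0]
# A 400-byte file read up to byte 120 leaves 280 bytes unread.
print(lag_bytes(entry, 400))
```

On a real host you would read /var/lib/filebeat/registry/filebeat/data.json and take the size from os.stat() on each source path; a steadily growing lag would confirm the shipper cannot keep up.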

Why am I asking about this? Because I have a problem with my Filebeat process, which doesn't send all the information from a big log. Big means the file grows by about 400 MB every hour; at the end of the day it is about 9-10 GB. With small log files there is no problem.

The Filebeat configuration is currently the default: one worker, default bulk_max_size, default scan_frequency, no additional logs. It will be changed on Monday (more logs, the possibility to change the configuration, etc.). Right now I cannot change the configuration or restart the service.
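For Monday, these are the options I am considering. A sketch of a filebeat.yml, with option names taken from the Filebeat reference; the host and values are illustrative starting points, not recommendations:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /apps/logs/my_apps/*.log    # example path
    scan_frequency: 10s             # how often to check for new/changed files

output.elasticsearch:
  hosts: ["https://elk.example.com:9200"]  # hypothetical host
  worker: 2              # default is 1; more workers = more parallel bulk requests
  bulk_max_size: 2048    # events per bulk request; raise gradually and measure
```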

One more piece of information that I think is important: Filebeat runs on a machine with 4 CPUs and has CPUQuota set to 5%. I suspect the CPUQuota is the bottleneck.
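If the service runs under systemd, the quota could be raised with a drop-in once changes are allowed. A sketch, assuming the unit is named filebeat.service; 5% of one CPU is very little for shipping ~400 MB/hour, and on a 4-CPU machine up to 400% is available:

```ini
# /etc/systemd/system/filebeat.service.d/override.conf (hypothetical path)
[Service]
CPUQuota=100%
```

Followed by `systemctl daemon-reload` and a restart of the service.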

top -p shows this process constantly at 5%, sometimes with higher values: 8-15%.

What have I observed? During the night, when the log file is only 300 MB or less, all data is sent to ELK without problems.

Do you have any experience with sending data to ELK from big logs? Can you give me some advice on how to tune Filebeat?

Best Regards,
Dan

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.