Too many logs for Filebeat?

I'm trying to run a POC with Filebeat to capture logs from Signiant file delivery. We receive files, we send files out, and there are processing steps in Signiant. For each event, for every process, a log file is created and updated throughout the process, and that is without debug mode enabled.
Regardless, there are a lot of log files and they are massive: about 80,000 files a day, adding up to roughly 50 to 70 GB per day.
In the past we tried to set up a single Filebeat to capture all of this. It did not do what we wanted (I think that was in the 6.x era), and we concluded that one Filebeat, no matter how many workers we configured within the limits of the documentation, was simply not enough.

Now that we are back in earnest with a new POC (proof of concept), this time on Elastic 7.17, what would be the best setup for Filebeat?

Could you please provide more information? Are you rotating the files? What is the largest message you send through Filebeat, and what is the median size? Are you doing any processing in Filebeat?
What issues do you see in Filebeat?

Filebeat has many options, so you can adjust almost everything to fit your use case. If you provide more details, I can offer a few recommendations.
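
For throughput, the settings that usually matter most are on the output side: the number of workers, the bulk size, and the in-memory queue. As a rough sketch of that part of filebeat.yml (the host name and all of the numbers below are placeholders to illustrate the knobs, not recommendations; they need to be tuned against your own tests):

```yaml
# Sketch only -- host and sizes are placeholders, tune them for your environment.
output.elasticsearch:
  hosts: ["https://es01.example.internal:9200"]
  worker: 4               # parallel bulk requests per configured host
  bulk_max_size: 2048     # maximum events per bulk request

queue.mem:
  events: 16384           # events buffered in memory ahead of the output
  flush.min_events: 2048  # flush once this many events are queued...
  flush.timeout: 1s       # ...or after this long, whichever comes first
```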


No rotation. Each filename is unique. A maintenance job deletes the oldest files when the mount grows beyond 50 GB.

I do not know what the biggest message sent through Filebeat is, nor what the median size is.

No processing in Filebeat; in fact, that is what we wanted to find out: what would the default behaviour be? We never got that far.
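
Given that description (unique filenames, no rotation, files eventually deleted by a maintenance job), the input-side settings that matter most are the ones that close finished harvesters and keep the registry from growing indefinitely. A minimal sketch of a log input under those assumptions; the path is made up and the limits and durations are examples to tune, not recommendations:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /signiant/logs/*.log    # hypothetical location of the delivery logs
    harvester_limit: 512        # cap how many files are held open in parallel
    close_inactive: 5m          # close a harvester once a file stops changing
    ignore_older: 48h           # skip files that have not been updated recently
    clean_removed: true         # drop registry entries for deleted files
```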

Run Filebeat in the foreground and take a look at the monitoring output to understand the log line processing rate (see the sketch after the questions below).

  • How long is each log line?
  • Do you index the full log message or not?
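
One way to read that rate out of the POC itself, assuming Filebeat 7.17: run filebeat -e -c filebeat.yml in the foreground and watch the periodic "Non-zero metrics in the last 30s" lines, or enable the local stats endpoint and poll it with curl while the test runs. A sketch of the relevant filebeat.yml settings:

```yaml
# Sketch: make throughput numbers visible while the POC is running.
logging.metrics.enabled: true   # periodic metrics lines in the Filebeat log
logging.metrics.period: 30s

http.enabled: true              # local stats API, e.g. curl http://localhost:5066/stats
http.port: 5066
```

The acked event counts reported there give a rough events-per-second figure to compare against the 80,000 files and 50 to 70 GB per day mentioned above.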
