Server Spec for Filebeat

We are planning to use Filebeat to send log data
from a log collection server to our SIEM.
The log volume amounts to about 100GB per day,
for an average throughput of roughly 1.2MB/s (9.5Mbps).
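As a sanity check on those averages, the arithmetic can be sketched as below; the exact figure depends on whether "GB" means 10^9 or 2^30 bytes, which is why the rounded numbers above sit slightly apart.

```python
# Rough throughput check for 100 GB/day, assuming decimal GB (1 GB = 10**9 bytes).
BYTES_PER_DAY = 100 * 10**9
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

bytes_per_sec = BYTES_PER_DAY / SECONDS_PER_DAY  # average sustained rate
megabits_per_sec = bytes_per_sec * 8 / 10**6

print(f"{bytes_per_sec / 10**6:.2f} MB/s, {megabits_per_sec:.2f} Mbps")
```

With binary GiB (2^30 bytes) the same calculation gives about 1.24 MB/s and 9.9 Mbps, so the quoted 1.2MB/s / 9.5Mbps is a reasonable rounding either way.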

Q1.
Could you tell us the recommended server specification
(memory, HDD, CPU) for the log collection server running Filebeat?

Q2.
Can Filebeat itself handle sending 100GB/day of log data
(average throughput 9.5Mbps)?

Q3.
Can a single Filebeat process send 100GB/day,
or do multiple Filebeat processes have to be run?
If so, how many Filebeat processes are needed?

Is the "log collection server" an Elasticsearch cluster?

What is the retention period?
What are the different types of logs that will be ingested,
and what is the volume per day for each type?
Will you go for a virtualised environment or other?

Thank you for the reply.

Is the "log collection server" an Elasticsearch cluster?

No, the "log collection server" is not an Elasticsearch cluster.
It is a Linux server on which a syslog process collects
log data from the devices. Filebeat, running on the same server,
then sends the data to the SIEM.
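For the setup described above, a minimal filebeat.yml sketch might look like the following. This assumes syslog writes the device logs to files under /var/log/devices/ and that the SIEM can accept events over the Beats/Logstash protocol; the hostname, port, and paths are placeholders, not values from this thread.

```yaml
# Minimal sketch, not a tuned production config.
filebeat.inputs:
  - type: filestream
    id: device-syslog
    paths:
      - /var/log/devices/*.log

output.logstash:
  # Placeholder endpoint: replace with the SIEM's Beats-compatible receiver.
  hosts: ["siem.example.com:5044"]
  # A few output workers can help a single Filebeat process sustain ~1.2 MB/s.
  worker: 2
```

If the SIEM cannot speak the Beats protocol, a different output (e.g. Filebeat's Kafka output, or an intermediate Logstash) would be needed instead.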

What is the retention period?

The log data is retained on the "log collection server"
for a minute at most. In our design the data should be sent
to the SIEM as soon as possible.

What are the different types of logs that will be ingested,
and what is the volume per day for each type?

The logs are created by network security devices (e.g. IPS, firewalls).
There are 8 devices in total, and the combined data volume
amounts to 100GB per day.

Will you go for a virtualised environment or other?

Yes, we will set up the Filebeat server as a virtual machine.

With best regards,
