I'd like to use Filebeat to fetch our O365/Azure/AWS logs via the various modules.
How would it handle very high volumes? Are there any benchmarks, e.g. events or MB per second?
Is there any way to maintain a cluster of Filebeat instances?
Has anyone encountered a situation where one Filebeat per data type (O365/GSuite/Azure audit log) wasn't enough?
It seems that with the S3-SQS input specifically, horizontal scaling is feasible, but what about the others?
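For context, the reason S3-SQS scales horizontally is that SQS delivers each queue message to only one consumer at a time, so several Filebeat instances can safely poll the same queue without duplicating work. A minimal sketch of what each instance's config might look like (the queue URL and account ID here are hypothetical placeholders):

```yaml
# filebeat.yml — run the same config on N instances to scale out.
# SQS visibility timeout ensures each S3 notification is processed once.
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-logs-queue
```

The pull-based O365/GSuite/Azure inputs poll the provider APIs directly, so running multiple instances against the same tenant would fetch duplicate events rather than share the load — which is why I'm asking about their upper limits.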
This is simply an attempt to estimate the options we'll have if we choose this form of ingestion, and to understand the upper limits.
Thousands of events per second, if I'm thinking of O365/GSuite for example?