I'm concerned about using Filebeat to gather logs from multiple laptops running macOS: some of them are sometimes connected over cellular networks, so large log volumes could translate into extra data charges for the users.
Is there a clever way to cap transfers at, say, 100 KB per hour, or something along those lines? I thought about lowering bulk_max_size, but I don't think that would help: it only limits the size of each batch, and the data will still be transferred eventually as long as the harvester keeps reading files.
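For context, this is the kind of setting I was experimenting with. A minimal filebeat.yml sketch; the log path and the Logstash endpoint are placeholders, and the comments reflect my understanding of what these options do and don't limit:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log        # placeholder path

output.logstash:
  hosts: ["logs.example.com:5044"]  # placeholder endpoint
  bulk_max_size: 1024               # caps events per batch, not total bytes per hour
  compression_level: 3              # shrinks each transfer, but doesn't bound overall volume
```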
Thank you, Adrian. Unfortunately, all the options you mention (except using the OS to limit bandwidth) are unpredictable, in the sense that a large amount of data will still eventually mean a large transfer.
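In case it helps others, the OS-level route on macOS can be done with pf's dummynet support. A rough sketch, assuming Filebeat ships to Logstash on TCP port 5044 (adjust the port for your output); loading the rule into the com.apple anchor namespace lets the stock /etc/pf.conf pick it up without edits:

```sh
# Create a dummynet pipe capped at roughly 100 Kbit/s (tune to taste)
sudo dnctl pipe 1 config bw 100Kbit/s

# Route outbound traffic to the (assumed) Logstash beats port through the pipe
echo "dummynet out proto tcp from any to any port 5044 pipe 1" \
  | sudo pfctl -a com.apple/throttle -f -

# Enable pf if it isn't already running
sudo pfctl -E
```

This throttles the connection rather than capping total bytes per hour, but at least it makes the worst-case cellular usage predictable.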