Hi, what are the config settings for Filebeat to pick up new lines as frequently as possible?
So I will have two prospectors.
1- One that reads logs regularly, using default settings
2- One that reads "events" (separate file) as frequently as possible.
I understand there is scan_frequency, but that controls how often new files are checked.
What parameters exist to lower the latency of ingesting and delivering new lines? And yes, I understand this can be CPU intensive, so I will tune for my scenario.
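For context, a minimal sketch of the two-prospector setup described above (the paths are hypothetical placeholders):

```yaml
filebeat.prospectors:
  # 1- regular logs, default settings
  - type: log
    paths:
      - /var/log/app/*.log
  # 2- the separate "events" file, to be tuned for low latency
  - type: log
    paths:
      - /var/log/app/events.log
```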
Filebeat tries to read until EOF as fast as possible, subject to backpressure from the memory queue and the outputs. Once EOF is reached, it backs off before trying to read more lines; see the backoff, backoff_factor, and max_backoff settings.
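A hedged example of tuning the backoff settings on the "events" prospector (the path and the concrete values are illustrative, not recommendations):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/events.log
    # wait this long after EOF before checking the file again (default 1s)
    backoff: 100ms
    # upper bound on the exponential backoff (default 10s)
    max_backoff: 100ms
    # multiplier applied each time the file stays idle (default 2);
    # 1 keeps the backoff constant at the value above
    backoff_factor: 1
```

Setting backoff equal to max_backoff with backoff_factor: 1 effectively gives a fixed polling interval; how low you can push it depends on the CPU cost you are willing to pay.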
The event queue buffers events. By default the queue is flushed once it is full, or 1s after the first event arrives. Setting the flush timeout to 0 can improve latency if you only have a small number of events. See the internal queue docs.
With a timeout of 0 the queue implementation tries to forward events immediately, but it can still buffer up batches in case the outputs are busy.
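A sketch of the corresponding memory queue settings in filebeat.yml (values shown are assumptions for a low-latency setup, not defaults):

```yaml
queue.mem:
  events: 4096        # maximum number of events the queue can buffer
  flush.min_events: 1 # don't wait to accumulate a larger batch
  flush.timeout: 0s   # forward events to the outputs immediately
```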