I have the following use case:
Using Filebeat to tail a single log file. Based on the type of log entry detected I need to route the data to a different custom processor.
For example:
Lines that start with A would be stored as is
Lines that start with B would be pre-aggregated (count the number of lines of this type), then stored somewhere else
Note that I'm using Filebeat as a "go" library.
Currently I define a single prospector with multiple entries in "include_lines", then evaluate conditions in the Outlet to determine how to route each event.
Ideally I would be able to evaluate once instead of twice. Is that possible?
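To make the goal concrete, here is a minimal sketch of the single-pass dispatch I'm after, in plain Go (the handlers are hypothetical stand-ins, not Filebeat API calls):

```go
package main

import (
	"fmt"
	"strings"
)

// route inspects each line exactly once and dispatches by prefix:
// "A" lines are stored as is, "B" lines are pre-aggregated (counted).
func route(lines []string) (stored []string, bCount int) {
	for _, line := range lines {
		switch {
		case strings.HasPrefix(line, "A"):
			stored = append(stored, line) // store as is
		case strings.HasPrefix(line, "B"):
			bCount++ // aggregate: count only
		}
	}
	return stored, bCount
}

func main() {
	lines := []string{"A first", "B second", "B third", "C ignored"}
	stored, n := route(lines)
	fmt.Println(stored, n)
}
```

The point is that the prefix check happens once per line, instead of once in "include_lines" and again in the Outlet.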
Do you want to route events to different outputs (Logstash, Elasticsearch), or do you only need some filtering/processing? The latter can be done in configuration alone: processors support conditions (regex, string match, ...).
Some filtering can be done in Filebeat using processors with conditionals (see the docs for examples and the list of available processors).
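For example, a processor with a condition can drop everything that doesn't match one of your prefixes, so "include_lines" is no longer needed (a sketch; adjust the regex to your actual prefixes):

```yaml
processors:
  - drop_event:
      when:
        not:
          regexp:
            message: '^(A|B)'
```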
Filebeat does not support event routing. The closest thing to event routing is a configurable index name for the Elasticsearch output (or Kafka topic). If you need more routing capabilities based on actual contents, you would have to use Logstash.
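For example, the Elasticsearch output can pick an index per event with conditions (a sketch; index names are made up, and availability of `indices` depends on your Filebeat version):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "raw-lines"
      when:
        regexp:
          message: '^A'
    - index: "aggregated-lines"
      when:
        regexp:
          message: '^B'
```

Note this only selects where an event lands; the pre-aggregation you describe would still have to happen elsewhere.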