Do not have any two prospectors report the same file. The file's state/offset is currently stored globally, and the two prospectors will interleave.
You can use the when.equals condition with match or regex to check whether a log message starts with any of these prefixes, and then ship it to the respective pipeline. But are the logs really so different that you need separate pipelines? For example, one can configure multiple patterns in the grok processor.
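As a sketch of the routing idea, Filebeat can choose an ingest pipeline per event via conditions on the output. The pipeline names, path, and message prefix below are placeholders, not from the original thread:

```yaml
# filebeat.yml (sketch; names and prefixes are made up)
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipelines:
    # Events whose message starts with "AUDIT" go to one pipeline...
    - pipeline: "audit-pipeline"
      when.regexp:
        message: "^AUDIT"
    # ...entries without a condition act as a catch-all.
    - pipeline: "default-pipeline"
```

Rules are evaluated in order, so the unconditional entry should come last.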
The lines in one file can indeed be very different. One line type can have 80 fields while another has only 7. And there are over 20 different types... The first few letters of a line indicate its type.
Right now I am using ingest node instead of Logstash. Will using multi-patterns in grok processor slow down the ingestion speed? Does grok try to match the patterns one by one until one is matched? Should I actually consider using Logstash with Filebeat?
I have no idea exactly how grok is executed in the Elasticsearch Ingest Node, but I'd assume the patterns are tried one by one. Anyway, having just 2 patterns shouldn't be much of a concern.
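For illustration, a single grok processor can list several patterns instead of requiring separate pipelines. The pipeline name, prefixes, and field captures below are placeholders:

```json
PUT _ingest/pipeline/my-logs
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "^A %{WORD:user} %{NUMBER:duration}",
          "^B %{GREEDYDATA:detail}"
        ]
      }
    }
  ]
}
```

The patterns are an ordered list; extraction uses the first pattern that matches the message.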
Which Filebeat version are you on (match was introduced somewhat later; use regexp instead)? Any logs?
The syntax for the when clause is when.<condition>.<full field name>: <value to compare with>. You have used when.<condition>.<condition>: <value to compare with>. Both equals and match are conditions, not field names.
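For example, assuming the field is message and "SHORT" is a placeholder value:

```yaml
# wrong: the condition name is repeated where a field name belongs
when.equals.equals: "SHORT"

# right: condition, then the full field name, then the value
when.equals.message: "SHORT"
```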