Hi, in our project we store all our logs in one log file with the log pattern:
timestamp-server-id-loglevel-program-module-...
I set up a Filebeat → Logstash → Elasticsearch pipeline, and I want to apply different grok patterns to different log levels.
The problem is that since I can't (and shouldn't) define multiple prospectors over the same file, I need another way to attach the log level before the events are sent to Logstash.
At first I wanted to use processors to detect whether a log line contains a level keyword, but I can't find a supported processor that adds fields, which is easy to do in a prospector config using:
```
fields:
  level: log
```
```
fields:
  level: error
```
...and so on for each level.
(the `include_fields` processor can't add fields, and the `rename` processor can't change a field's value)
So my questions are:
Can I define more than one prospector over the same file?
If not, how can I attach the log level field before sending the events to Logstash? Is there a supported processor that adds a field when the message contains a level keyword?
You should spend the time parsing the logs in Logstash instead of trying to pre-parse them in Filebeat; Filebeat is really designed to ship logs upstream.
Use a multi-stage grok or dissect pattern match, e.g. with dissect:
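A minimal sketch of a dissect filter for the pattern described above (`timestamp-server-id-loglevel-program-module-...`). The field names are assumptions based on that pattern, and this assumes the timestamp itself contains no `-` characters; the trailing `%{rest}` captures the remainder of the line for later stages:

```
filter {
  dissect {
    mapping => {
      "message" => "%{timestamp}-%{server_id}-%{loglevel}-%{program}-%{module}-%{rest}"
    }
  }
}
```

Dissect is cheaper than grok because it splits on fixed delimiters rather than running regular expressions, so it's a good first stage when the prefix of every line has the same shape.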
There's quite a bit you can do with Logstash; the tradeoff is how much CPU/memory is consumed by the filters in your pipelines. Glad to see that worked.
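To illustrate the multi-stage idea: a first grok stage can pull out the common prefix (including the level), and a conditional second stage can then parse the remainder differently per level. The pattern names (`TIMESTAMP_ISO8601`, `LOGLEVEL`, etc.) are standard grok patterns, but the field names and the `ERROR`-specific layout here are assumptions, not your actual format:

```
filter {
  # Stage 1: extract the shared prefix; keep the remainder in [rest]
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}-%{DATA:server_id}-%{LOGLEVEL:loglevel}-%{GREEDYDATA:rest}" }
  }
  # Stage 2: level-specific parsing of the remainder
  if [loglevel] == "ERROR" {
    grok {
      match => { "rest" => "%{DATA:program}-%{DATA:module}-%{GREEDYDATA:error_detail}" }
    }
  }
}
```

Each extra grok stage adds regex work per event, which is the CPU tradeoff mentioned above; keeping stage 1 as a dissect filter and reserving grok for the level-specific parts is one way to limit that cost.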