I have a custom log format that is being sent to stdout. Filebeat is correctly capturing this output. I would like to apply a grok pattern against these lines and have them parsed/indexed. Is a pipeline, a processor, or something else the best way to do this? I only want to apply the processor or pipeline against the lines that have a specific string (e.g. "ERRORLINE").
If you are already using Logstash then I would recommend doing it there.
If you don't ever plan on using Logstash then I would recommend an ingest processor.
Thank you! We do not use Logstash.
I have multiple log types that are sent to stdout. How do I configure filebeat to run the ingest processor if it sees a specific string (e.g. ERRORLINE)? Or is that something that is configured on the ES side?
Check out the pipeline documentation.
So I would set up a pipeline in ES to process this and use the grok processor. You can also add a conditional so that processor only runs when the specific condition you mention is met. That way you don't waste processing on data that doesn't need to go through that pipeline.
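For example, a minimal sketch of such a pipeline (the pipeline name and grok pattern here are just placeholders, substitute whatever your custom format actually looks like):

PUT _ingest/pipeline/errorline-pipeline
{
  "description": "Parse custom ERRORLINE log lines",
  "processors": [
    {
      "grok": {
        "if": "ctx.message != null && ctx.message.contains('ERRORLINE')",
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:log_message}"]
      }
    }
  ]
}

The "if" condition is a Painless expression, so only documents whose message contains the string get run through the grok processor; everything else passes through untouched.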
Thanks! That got it.
For those interested, here is the solution. For background, our setup is Kubernetes: Filebeat runs as a DaemonSet and one of our pods was producing two log types. The first log type has an existing module; the second log type was custom. Both are written to stdout. The solution was to:
- Create a pipeline with a grok ingest processor on the Elasticsearch side.
- In the Filebeat ConfigMap, add the following:
output.elasticsearch:
  hosts: ['elasticsearch-url']
  indices:
    - index: "customformat"
      when.contains:
        message: "ERRORLINE-CUSTOMFORMAT"
  pipelines:
    - pipeline: "customformat-processing-pipeline"
      when.contains:
        message: "ERRORLINE-CUSTOMFORMAT"
The custom pipeline is only applied to the matching lines to save on processing, and these documents are written to their own index.
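If you want to sanity-check the pipeline before pointing Filebeat at it, the simulate API is handy. A rough sketch (the sample message below is made up; use one of your real log lines):

POST _ingest/pipeline/customformat-processing-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "ERRORLINE-CUSTOMFORMAT 2021-01-01T00:00:00Z something went wrong" } }
  ]
}

The response shows the document as it would look after the pipeline runs, so you can confirm the grok pattern extracts the fields you expect.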