With the Nginx integration, I need to ingest customized logs (a custom log_format), so I need to define a custom Grok processor rule.
It is not possible to modify the "logs-nginx.access-1.23.0" pipeline, so I wanted to define a new Grok processor in the integration UI in "logs-nginx.access@custom". However, that pipeline is never executed, because the main pipeline fails first with the message "Provided Grok expressions do not match field ..." due to the custom format.
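For illustration, this is the kind of customization that breaks the stock pattern: a sketch of a log_format with a hypothetical field ($request_time) inserted before the standard fields the integration expects.

```nginx
# Hypothetical custom format: an extra field at the START of the line
# means the integration's built-in Grok pattern no longer matches.
log_format custom '$request_time $remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log custom;
```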
Unfortunately, it turns out there is no easy way to do this today with the Elastic Agent integration for Nginx (and others) ... apologies, and yes, this should be easier. We have had some internal conversations about this.
It is against best practices, but probably the quickest approach is to edit the managed pipeline. Nothing really stops you from fixing the Grok processor. I would add another Grok pattern so that if someone sends regular Nginx access logs, they will still parse.
So you could try adding your Grok expression as the first pattern in the managed pipeline, and send your data directly from the Agent to Elasticsearch.
You will need to be aware when you upgrade the integrations to make sure you put your custom pattern back in.
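As a rough sketch of what that edit could look like in the managed pipeline's grok processor (the first pattern covers a hypothetical custom format with a leading request time; the second keeps a stock-style pattern so standard access logs still parse; the field names here are approximations, not the integration's exact ECS mappings):

```json
{
  "grok": {
    "field": "message",
    "patterns": [
      "%{NUMBER:nginx.access.request_time:float} %{IPORHOST:source.address} - %{DATA:user.name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{DATA:nginx.access.info}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}",
      "%{IPORHOST:source.address} - %{DATA:user.name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{DATA:nginx.access.info}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}"
    ],
    "ignore_missing": true
  }
}
```

The grok processor tries the patterns in order and uses the first one that matches, which is why the more specific custom pattern goes first.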
Hi,
Thank you for your responses.
The solution for us was to customize the Nginx log_format without modifying the beginning, only adding new log items at the end of the log line. This way, the Grok processor of the Elasticsearch pipeline for the Nginx integration still matches the log. We then customized processing by adding an extra Grok processor in the integration's custom pipeline.
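A sketch of this approach, assuming two hypothetical extra fields appended to the standard format:

```nginx
# Standard combined-style fields first, custom fields appended at the end
log_format custom '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" '
                  '$request_time $upstream_response_time';
```

The extra Grok processor in the @custom pipeline can then pick up just the trailing fields from the original message (field names here are illustrative):

```json
PUT _ingest/pipeline/logs-nginx.access@custom
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{GREEDYDATA} %{NUMBER:nginx.access.request_time:float} %{NUMBER:nginx.access.upstream_response_time:float}"
        ],
        "ignore_failure": true
      }
    }
  ]
}
```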
Dominique
In fact, there was discussion of adding an additional Grok pattern with just such a "catch all" at the end, which users could then parse in their @custom pipeline.