Fleet Nginx integration for customized log format

Hi,

With the Nginx integration, I need to ingest logs in a customized format (log_format), so I need to define a custom Grok processor rule.
It is not possible to modify the "logs-nginx.access-1.23.0" pipeline, so I want to define a new Grok processor via the integration UI in "logs-nginx.access@custom". However, this pipeline is never executed, because the main pipeline fails with the message "Provided Grok expressions do not match field ..." due to the custom format.

I tested the solution provided in this topic (enable "preserve original event"): Ingesting Nginx access logs in custom log format,
but "logs-nginx.access@custom" is still not executed.

How can I ingest customized Nginx logs with the Nginx integration, without using a Logstash output?

Note: with a Logstash output, I can parse my custom log, but there is still an error during the Elasticsearch ingest pipeline with the Grok processor.

Regards

Dominique

Quick questions -

  1. Are you using Filebeat to ingest?
  2. Could you share a sample of the custom log you want to parse?
  3. What does your Grok processor look like?

Hi @dominique.bejean

Unfortunately, it turns out there is no easy way to do this today with the Elastic Agent integration for Nginx (and others)... apologies, and yes, this should be easier. We have had some internal conversations about it.

It is against best practices, but probably the quickest approach is to edit the managed pipeline. Nothing really stops you from fixing the Grok. I would add another Grok pattern so that if someone sends regular Nginx access logs, they will still parse.

So you could try adding that Grok expression as the first pattern in the managed pipeline, and send your data directly from the Agent to Elasticsearch.
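As a rough sketch of what that edit could look like (the custom pattern, the appended `rt=` field, and the field names here are illustrative assumptions, not the integration's actual definition), the grok processor accepts a list of patterns that are tried in order, so the custom one can go first and the integration's original pattern stays as a fallback:

```json
{
  "grok": {
    "field": "event.original",
    "patterns": [
      "%{IPORHOST:source.address} - %{DATA:user.name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{DATA:nginx.access.info}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} rt=%{NUMBER:nginx.access.request_time:float}",
      "<the integration's original access-log pattern, kept unchanged as a fallback>"
    ]
  }
}
```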

You will need to remember, when you upgrade the integration, to put your custom pattern back in.

Hi,
Thank you for your responses.
The solution for us was to customize the Nginx log_format without modifying the beginning of the line, just adding new log items at the end. This way, the Grok processor of the Elasticsearch pipeline for the Nginx integration still matches the log. We then parsed the extra items by adding a Grok processor in the custom pipeline of the integration.
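To illustrate this approach with a hedged sketch (the extra `rt=`/`ua=` items and the target field names are my own assumptions, not Dominique's actual format): the log_format keeps the standard combined format as its prefix and only appends new items at the end:

```nginx
log_format custom '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" '
                  'rt=$request_time ua="$upstream_addr"';
```

Then a grok processor in "logs-nginx.access@custom" picks up only the trailing items (this assumes "preserve original event" is enabled so that event.original is available):

```json
{
  "processors": [
    {
      "grok": {
        "field": "event.original",
        "patterns": [
          "rt=%{NUMBER:nginx.access.request_time:float} ua=\"%{DATA:nginx.access.upstream_addr}\"$"
        ],
        "ignore_failure": true
      }
    }
  ]
}
```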
Dominique


@dominique.bejean Excellent solution, thanks for sharing!

In fact, there was discussion of adding an additional Grok pattern, just a "catch all" at the end, so that users could then parse the original line in their @custom pipeline.
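To sketch that idea (this is the proposal under discussion, not shipped behavior): a GREEDYDATA catch-all appended as the last pattern means the managed grok processor never fails, so unmatched custom lines pass through intact for the @custom pipeline to parse:

```json
{
  "grok": {
    "field": "event.original",
    "patterns": [
      "<the integration's normal access-log pattern>",
      "%{GREEDYDATA:message}"
    ]
  }
}
```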