We are using fleet-managed Elastic Agents to send log events to Elasticsearch. The Custom Logs integration has been set up as part of this process, and the log events are passed through an ingest pipeline before they are ingested into Elasticsearch. The pipeline uses a Grok processor with the %{COMBINEDAPACHELOG} pattern for the Apache access logs; however, the events are not getting parsed. All the other processors work fine, just not the Grok one. I am attaching a sample log event and a screenshot of the ingest pipeline below. Can anyone help me with this issue?
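For reference, the relevant part of the pipeline looks roughly like this (a reconstructed sketch of what the screenshot shows; the pipeline name is a placeholder):
PUT _ingest/pipeline/apache-access-logs
{
  "description": "Parse Apache access logs (sketch; name is hypothetical)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    }
  ]
}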
Hi Ashish,
Thanks for your response. However, I need to use the Custom Logs integration, as I have a few custom logs in addition to the Apache access logs.
It also parses with the pipeline you supplied, but you will need to do additional work to properly set the timestamp etc. Perhaps try the Apache integration first.
But when we add the integration, we don't have any option to add a pipeline. Once the integration is completed, it creates a managed pipeline, and editing it gives a warning that it can break Kibana. I tried adding the same pipeline later; however, the processors are not working.
So then just name your pipeline logs-custom@custom:
PUT _ingest/pipeline/logs-custom@custom
and it will be called automatically... this is the best way to do this...
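For example, a minimal sketch (the grok processor here is just an illustration; put your own processors in):
PUT _ingest/pipeline/logs-custom@custom
{
  "description": "Custom processing, called automatically after the managed pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    }
  ]
}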
OR... you can also create all of this with one call:
POST kbn:/api/fleet/epm/custom_integrations
{
  "integrationName": "customapp",
  "datasets": [
    {
      "name": "customapp",
      "type": "logs"
    }
  ]
}
GET kbn:/api/fleet/epm/packages/customapp
# if you want to clean up
DELETE kbn:/api/fleet/epm/packages/customapp/1.0.0
A couple of "housekeeping" notes:
First, it is generally not a good idea to add to a solved topic, as people tend not to look at them. You should open a new topic, and you can refer to this one.
Also, please try not to @ people directly with your questions... it is a community forum; open your topic and see if it gets answered.
All that said, all good...
Most likely this is failing, and since you set "ignore_failure": true it will just fail silently... and the on_failure handler will not get called. For proper failure handling, see here.
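For example, a sketch of a grok processor with an on_failure handler instead of "ignore_failure": true (the error field name is just an example):
PUT _ingest/pipeline/logs-custom@custom
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"],
        "on_failure": [
          {
            "set": {
              "field": "error.message",
              "value": "grok failed: {{ _ingest.on_failure_message }}"
            }
          }
        ]
      }
    }
  ]
}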
I would recommend trying the _simulate API with your sample documents to see what is failing.
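For example (sketch; swap in your pipeline name and a real event from your logs):
POST _ingest/pipeline/logs-custom@custom/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] \"GET /apache_pb.gif HTTP/1.0\" 200 2326 \"http://www.example.com/start.html\" \"Mozilla/4.08 [en] (Win98; I ;Nav)\""
      }
    }
  ]
}
The response shows each document after the pipeline runs, including any processor error, which is usually enough to spot why the grok pattern does not match.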
If I test it with a document from the index, parsing happens successfully.
However, after applying this pipeline, no more lines arrive from the log files. If I remove the json processor, the lines arrive again.
Any idea how I can debug this? Where could I find some log lines about this failing pipeline?