This is more or less a duplicate of Fleet Custom Logs Ingest Pipeline, but I still find it hard to figure out which steps I have to follow. I would really appreciate better documentation or some useful directions.
I am also trying to ingest a log file using the custom log integration in Fleet. Is there a way to configure a custom ingest pipeline as well so I can parse the custom fields in the message field using grok patterns?
We are planning some changes to how we set up the templates to make user customizations much easier to add (see kibana#121118); however, this won't fully solve the custom ingest pipeline case.
The best option I have right now is to edit the existing `logs-log.log@custom` component template and then roll over the data stream:
1. Create a new ingest pipeline.
2. Edit the `logs-log.log@custom` component template to add the `default_pipeline` index setting, pointing it at the newly created ingest pipeline.
3. Roll over any existing data streams that match `logs-log.log-*` using the Rollover API so the new settings take effect.
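The steps above might look like this in Kibana Dev Tools. This is just a sketch: the pipeline name `logs-log.log-custom-pipeline`, the grok pattern, and the `logs-log.log-default` data stream name (assuming the `default` namespace) are all examples, not fixed names.

```
# 1. Create a new ingest pipeline (name and grok pattern are examples)
PUT _ingest/pipeline/logs-log.log-custom-pipeline
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log.level} %{GREEDYDATA:msg}"]
      }
    }
  ]
}

# 2. Point the @custom component template at the pipeline
PUT _component_template/logs-log.log@custom
{
  "template": {
    "settings": {
      "index.default_pipeline": "logs-log.log-custom-pipeline"
    }
  }
}

# 3. Roll over each existing matching data stream
POST logs-log.log-default/_rollover
```

After the rollover, newly written documents go through the pipeline; documents indexed before the rollover are not reprocessed.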
How do I use an ingest pipeline via the "Custom configurations" field in the integration settings, and is it possible to just use the same configuration as in Filebeat? E.g. can I just throw in `output.elasticsearch.pipeline: 'somepipeline'`?
It would be better to just add an optional field in the custom logs integration settings where it's possible to look up existing pipelines, or to tell users an ingest pipeline is required, etc.
What you did works, but using the pipeline setting on the input itself is something we don't encourage. The feature is there because we inherit it from beats directly. Instead the pipeline should be set in the settings on the data stream. But as discussed above, we don't provide everything needed here yet.
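For reference, the input-level setting discussed here (which works today but is not the encouraged path) would be entered in the custom logs integration's "Custom configurations" box as input-level YAML. This is a sketch; `somepipeline` is a placeholder for your own pipeline name:

```yaml
# Input-level pipeline setting inherited from Filebeat.
# Works, but prefer setting index.default_pipeline on the
# data stream's @custom component template instead.
pipeline: somepipeline
```

The data-stream approach is preferred because the pipeline then applies regardless of which input writes to the data stream, and it survives changes to the integration policy.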
You know the stack well, and I'm sure it would be simple for you to switch from a setting in the YAML to a setting on the data stream as soon as we provide it. What worries me about documenting it is that many users will start using it and eventually get stuck. Maybe we can mention exactly this issue in the docs?