Hello. Is it possible to assign an ingest pipeline to the Elastic Agent? Ideally, we'd like to push this out to the entire fleet as well. I've tried the configuration below, and it doesn't seem to have any effect.
Let me know if you have any questions. Thanks!
I've found the solution and think it may help others who run into this problem in the future.
I didn't realize that the Elastic Agent automatically creates data streams that are processed by preset ingest pipelines. I edited the existing pipeline to capture the organization data we wanted, and I now know that all data received from the agent goes through these pipelines.
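For anyone wanting to do something similar, an edit like this can be made with a `set` processor in the ingest pipeline. The pipeline name and field below are hypothetical examples, not the actual names from my setup:

```
PUT _ingest/pipeline/logs-mydataset-pipeline
{
  "description": "Add organization metadata to agent logs",
  "processors": [
    {
      "set": {
        "field": "organization.name",
        "value": "my-org"
      }
    }
  ]
}
```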
Hi @bsanderRMG Great to hear you found a solution. One thing to be careful about: do not modify data streams that belong to an integration, for example the data streams created for nginx or system. The reason is that as soon as the package gets upgraded, its assets are rewritten, so any modifications you made would be overwritten.
If you just used the logfile package and set your own dataset, then you should be fine.
I had a similar challenge when adding the "Custom Logs" integration (for simply tailing log files) to my fleet. This does not (at least in my case) create a dedicated ingest pipeline for those logs, even though it does create a data stream.
I managed to solve this by manually creating the pipeline and then adding the pipeline name in the "Advanced Options" part of the Integration settings:
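To spell out the first step: the pipeline referenced in the integration's Advanced Options has to exist before you reference it, and it can be created in Kibana Dev Tools. The name and the dissect pattern here are just illustrative placeholders; adapt them to your own log format:

```
PUT _ingest/pipeline/custom-logs-pipeline
{
  "description": "Parse log lines tailed by the Custom Logs integration",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{@timestamp} %{log.level} %{message}"
      }
    }
  ]
}
```

Then enter `custom-logs-pipeline` as the pipeline name in the integration's Advanced Options.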
@Grim Welcome to the board, and thanks for sharing this trick. I must confess I had missed that this is one more way it can be done. With the new indexing strategy (I gave a talk on it here yesterday: https://www.youtube.com/watch?v=ls1O-gB-Voo), we try to define the processing on the data stream itself instead of on the request. Of course, both approaches work.
So in your case, even if you specify the processing as part of the custom config, I would encourage you to change the dataset name to suit your needs instead of keeping the `generic` default.