Default ingest pipeline overwritten

Hi,

I created an index template `logs-{dataset_name}-default` and set up a data stream. I also set up a default ingest pipeline for this index.

However, after a number of days (and perhaps coincidentally an Elastic cluster upgrade), my default ingest pipeline setting got overwritten from my own to
logs-dev-default@2.1.0, which appears to be a managed pipeline?

{
  "managed_by": "fleet",
  "managed": true,
  "package": {
    "name": "log"
  }
}
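For context, assuming the data stream is named `logs-dev-default`, this is roughly how you can check which default pipeline its current backing indices are using:

```
GET logs-dev-default/_settings?filter_path=**.index.default_pipeline
```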

Does anyone know why this might have happened? It broke my logging flow, since my logs were no longer being processed by my custom ingest pipeline.

Thanks,

Jason

Hi @Jasonespo What version are you on?

We're on the latest version, 8.11.1. I have since added the default ingest pipeline setting to my index template instead of directly to the indices that are created; I think this is the solution. I also forced my template to be used instead of `logs-*` by setting its priority to 500.
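Roughly, the template now looks like this (the template and pipeline names here are placeholders, not my exact ones):

```
PUT _index_template/logs-mydataset-default
{
  "index_patterns": ["logs-mydataset-*"],
  "data_stream": {},
  "priority": 500,
  "template": {
    "settings": {
      "index.default_pipeline": "my-custom-pipeline"
    }
  }
}
```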

Still, it would be good to know why it got overwritten. I think what happened is that my index was using the `logs-*` index template without realising it, and so adopted its lifecycle policy. After X days I believe the index rolled over, and the new backing index no longer used my settings.

Hi @Jasonespo So I will do my best to explain as I understand it .....

So the default ingest pipeline IS managed by Fleet Integrations, because it is the pipeline used by integrations, say like nginx etc, that have OOTB ingest pipelines ... so that base pipeline IS managed, but for your custom logs that pipeline is basically empty.

So, for your "Custom" Logs you should use the @custom pipeline, which will NOT be overwritten.

So here is my example data stream:
logs-mydataset-mynamespace

In this case, you would create and use "logs-mydataset@custom", which would contain your custom pipeline logic and would NOT be overwritten.

Hope this makes sense...
For example, this is what the managed pipeline
logs-mydataset-2.3.0
contains:

[
  {
    "pipeline": {
      "name": "logs-mydataset@custom",
      "ignore_missing_pipeline": true
    }
  }
]
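So, as a sketch (using the dataset name from the example above), you would create the @custom pipeline like this, and the managed pipeline calls it for you:

```
PUT _ingest/pipeline/logs-mydataset@custom
{
  "processors": [
    {
      "set": {
        "field": "my_custom_field",
        "value": "processed"
      }
    }
  ]
}
```

Your own processors go in the `processors` array; the `set` processor and `my_custom_field` here are just an illustration.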


Ok, that makes sense. Thank you! :slight_smile: What I have mentioned above does work for my use case at the moment, but I will keep what you have said in mind if it breaks.

I'm almost certain that my problem was that I was using the logs-* index template without realising it, and not my own one!

Right, but if you use the Custom Logs integration with Elastic Agent, it will always "manage" parts of it... the key is your custom dataset.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.