How to index documents even when an ingest pipeline fails

I have an ingest pipeline "filebeat-7.2.0-apache-access-default" (and others); however, when a document doesn't match what the pipeline expects, it doesn't get indexed. Is there any way to specify that the document should be indexed anyway, even if the pipeline fails?

Preferably this would be a general setting that applies to all pipelines.

There are several options for this; see https://www.elastic.co/guide/en/elasticsearch/reference/7.4/handling-failure-in-pipelines.html
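
For example, a pipeline-level `on_failure` block catches any processor failure, the handlers run, and the document is still indexed. This is a sketch only: the pipeline name `my-pipeline` and the grok processor are placeholders, not the Filebeat-managed pipeline:

```
PUT _ingest/pipeline/my-pipeline
{
  "description": "Sketch: index the document even if a processor fails",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{COMBINEDAPACHELOG}"]
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
```

Alternatively, `ignore_failure: true` can be set on an individual processor so that its failures are skipped and the remaining processors still run.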

@spinscale Since I have no control over these pipelines (they are created by Filebeat, etc.), what I am looking for is a way to apply the "index even if it fails" behaviour to all pipelines without having to change them. Is there anything of that kind?

Do you care to explain your use case a little further? Are you trying to index data manually, or are you using another Filebeat instance? What kind of data are you indexing? I'm curious where this starts to differ from the 'general' Filebeat use case, or whether there is a general issue on the Filebeat/Elasticsearch side of things.

Thanks for helping!

@spinscale I'm using a typical ELK stack with Beats (all OSS) and adding all the pipelines etc. via the setup command. Beats (Filebeat, for example) are collecting logs (via the Apache module and others) and sending them to Logstash.

What I am seeing is that my Apache logs are not passing the pipeline (it complains that some fields don't exist) and are therefore not being indexed. I don't want this to happen: if a log passes the pipeline, fine; if not, it should at least be indexed so that I don't lose it.
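
For reference, the failure can be reproduced by simulating the pipeline against a sample document; the response shows exactly which processor fails and why. This is only a sketch: the pipeline name is the one from my setup, but the log line below is made up:

```
POST _ingest/pipeline/filebeat-7.2.0-apache-access-default/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "127.0.0.1 - - [10/Oct/2019:13:55:36 +0000] \"GET / HTTP/1.1\" 200 1234"
      }
    }
  ]
}
```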
