How to use multiple Ingest Pipelines in Elasticsearch?

Hi, I'm trying to switch from Logstash to the Elasticsearch Ingest Pipeline feature, and I'm having trouble with the topic of multiple ingest pipelines. How does Elasticsearch know when to use which pipeline?
I ship data with Filebeat and add tags for the different sources - in Logstash I can use something like the snippet below to decide which filter should be used:

filter {
  if "something" in [tags] {
    grok {
      match => {
        "message" => [
          # ... grok patterns for this source ...
        ]
      }
    }
  }
}

How do I do this in Elasticsearch? Is there a metadata field I should add at the Filebeat level?
I couldn't find anything like that in the documentation.

Perhaps take a look at this...

You can create a top-level pipeline that then calls sub-pipelines. Think of it as composable code.

Any processor can also have a condition.

You can base the condition on a tag, just like you did in Logstash.
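For example, a main pipeline can route documents to sub-pipelines with the pipeline processor plus a Painless if condition on the tags field. A minimal sketch (the pipeline and tag names below are placeholders, not anything from your setup):

PUT _ingest/pipeline/main-router
{
  "processors": [
    {
      "pipeline": {
        "name": "apache-pipeline",
        "if": "ctx.tags != null && ctx.tags.contains('apache')"
      }
    },
    {
      "pipeline": {
        "name": "nginx-pipeline",
        "if": "ctx.tags != null && ctx.tags.contains('nginx')"
      }
    }
  ]
}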

Most of what else you need to know is on that page as well. So get it up and running, and come back when you have questions about specific processors!

You need to follow the path suggested by Stephen: have a main pipeline and use conditionals to direct your documents to the other pipelines.

Another thing you need to check is whether you can really replicate your Logstash pipeline with an ingest pipeline. Some things that are pretty simple to do in Logstash may be pretty complicated in an ingest pipeline; one example is the translate filter in Logstash, which has to be done with an enrich processor.
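If you do need something like translate, the rough equivalent is to index your lookup data, create and execute an enrich policy, and then reference it from an enrich processor. A sketch with made-up index, policy, and field names:

PUT _enrich/policy/status-codes-policy
{
  "match": {
    "indices": "status-codes",
    "match_field": "code",
    "enrich_fields": ["description"]
  }
}

POST _enrich/policy/status-codes-policy/_execute

PUT _ingest/pipeline/translate-like
{
  "processors": [
    {
      "enrich": {
        "policy_name": "status-codes-policy",
        "field": "http.response.code",
        "target_field": "http.response.status"
      }
    }
  ]
}

Keep in mind the enrich index is a snapshot: you have to re-execute the policy when the lookup data changes.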

Others are impossible to do with an ingest pipeline, for example enriching from external sources.


I found that you can specify the pipeline name in Filebeat.
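For anyone else landing here: this goes in the Elasticsearch output of filebeat.yml, either as a single pipeline option or as conditional pipelines. A sketch, assuming tag-based routing with placeholder pipeline names:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipelines:
    - pipeline: "apache-pipeline"
      when.contains:
        tags: "apache"
    - pipeline: "nginx-pipeline"
      when.contains:
        tags: "nginx"

With this in place you can skip the top-level router pipeline entirely, since Filebeat already picks the right ingest pipeline per event.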
