Question: define default pipeline with different patterns depending on log.file.path

Hi,

I would like to have my logs ingested by two different pipelines, depending on the log file which is the source of the document.

  • Everything coming from /var/log/a.log should be processed by a_pipeline
  • Everything from /var/log/b.log by b_pipeline
  • For everything else I would like to have the document unaltered (no ingest pipeline)

I am trying something like:

PUT _ingest/pipeline/log_pipeline
{
   "description":"A pipeline of pipelines for log files",
   "processors":[
      {
         "pipeline":{
            "if":"log.file.path == '/var/log/a.log'",
            "name":"a_pipeline"
         }
      },
      {
         "pipeline":{
            "if":"log.file.path == '/var/log/b.log'",
            "name":"b_pipeline"
         }
      }
   ]
}

My a and b pipelines look something like this and seem to work as intended during simulation.

PUT /_ingest/pipeline/a_pipeline
{
    "description" : "A pipeline",
    "processors" : [
      {
        "grok" : {
          "field" : "message",
          "ignore_missing": true, 
          "patterns" : [ "%{TIMESTAMP_ISO8601:tstamp};%{GREEDYDATA:payload}" ]
        }
      }
    ]
}
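
(For anyone who wants to reproduce the simulation check mentioned above, a request along these lines works against a_pipeline — the sample message value here is made up for illustration:)

POST _ingest/pipeline/a_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2021-01-01T12:00:00;hello world"
      }
    }
  ]
}

The response should show the tstamp and payload fields extracted by the grok processor.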

However, when I add log_pipeline as the default_pipeline to my index, I don't find any messages at all in my index. Is my syntax with "if":"log.file.path == '/var/log/a.log'" wrong? What would be the correct syntax to delegate to different pipelines depending on the log file?
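
For context, the way I set the default pipeline on the index looks like this (the index name my-logs is just an example):

PUT my-logs/_settings
{
  "index.default_pipeline": "log_pipeline"
}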

I think I found it myself, I had to replace log.file.path with ctx.log.file.path


Hi @Christos_Gitsis

Good news! Can you post your solution so others can see?

Yup have to use the ctx. in the conditionals
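
For reference, here is the corrected top-level pipeline. Ingest conditionals are Painless scripts, so the document's fields are reached through ctx; I've also used the null-safe ?. accessor (an extra safety measure, not strictly required) so documents that don't have log.file.path at all just fall through unaltered instead of throwing a null pointer error:

PUT _ingest/pipeline/log_pipeline
{
   "description":"A pipeline of pipelines for log files",
   "processors":[
      {
         "pipeline":{
            "if":"ctx.log?.file?.path == '/var/log/a.log'",
            "name":"a_pipeline"
         }
      },
      {
         "pipeline":{
            "if":"ctx.log?.file?.path == '/var/log/b.log'",
            "name":"b_pipeline"
         }
      }
   ]
}

Documents from any other file match neither condition and are indexed unchanged.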

Also this is a great pattern.

I always create a top-level pipeline that can then conditionally call sub-pipelines, and if you build them modularly you can reuse them in other pipelines too... it's almost like it's code :wink: