If/else conditions in a pipeline filter: how to normalise data before mapping it into a common schema?

We have several sources coming in from syslog servers (different types of data) through a single pipeline. I want to split that single pipeline into multiple datasets before applying the common schema.
For instance, the syslog dataset currently carries auditd, /var/log/secure, /var/log/messages, etc., and I've written grok patterns for each of them in a patterns directory.
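To give an idea, one of the files in that patterns directory looks roughly like the sketch below (the pattern names and captured fields here are made-up placeholders, not the real ones):

# hypothetical patterns/nix_audit file in the patterns directory
AUDIT_PREFIX type=%{WORD:audit_type} msg=audit\(%{NUMBER:audit_epoch}:%{NUMBER:audit_sequence}\):
NIX_AUDIT %{AUDIT_PREFIX}\s*%{GREEDYDATA:audit_message}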

In the Logstash pipeline layer, I'm trying to do something like the following, but it isn't working:

filter {
  # attempt 1: dotted field name plus a glob-style wildcard in an == comparison
  if [log.file.path] == "/var/log/*audit*" {
    mutate {
      add_field => { "datatype" => "nix_audit" }
    }
  }
  if [log.file.path] == "/var/log/maillog" {
    mutate {
      add_field => { "datatype" => "nix_mail" }
    }
  }
}

But this is NOT adding the datatype field. The plan was to then apply the grok pattern matching each datatype (e.g. "nix_audit"), so that all the fields get populated per that pattern and each pattern is maintained in only one place.
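The follow-up step I had in mind looks roughly like this (NIX_AUDIT and the patterns_dir path are placeholders for the real pattern name and directory):

filter {
  if [datatype] == "nix_audit" {
    grok {
      # load the custom patterns written for each dataset
      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{NIX_AUDIT}" }
    }
  }
}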

Update 1:
I even tried the nested field reference syntax per https://www.elastic.co/guide/en/logstash/current/field-references-deepdive.html, but it's still not working:

      if [log][file][path] == "/var/log/maillog" {
        mutate {
          add_field => { "datatype" => "nix_mail" }
        }
      }

Marking this as closed, as the second option worked, i.e. a regex match: [log][file][path] =~ "/var/log/maillog*"
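For anyone landing here later, the final shape of the filter is roughly the sketch below. One caveat: =~ treats the right-hand string as an (unanchored) regex, not a shell glob, so the trailing * in "/var/log/maillog*" applies to the preceding g; writing the intent out as a regex is cleaner:

filter {
  # =~ does a regex match, which works where == against a wildcard string did not
  if [log][file][path] =~ "/var/log/.*audit" {
    mutate {
      add_field => { "datatype" => "nix_audit" }
    }
  }
  if [log][file][path] =~ "/var/log/maillog" {
    mutate {
      add_field => { "datatype" => "nix_mail" }
    }
  }
}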
