Issues with Logstash Conditionals inside filter

I have been trying to get a CSV file parsed into Logstash, and that part works fine. However, I need to add either a new tag or a new field based on certain conditions.

I have verified that the logs contain text matching the conditions below, but no tags or fields are added based on my condition. Any help would be much appreciated. (Neither the commented-out set of conditions nor the uncommented, simpler one seems to make a difference.)

input {
  file {
    path => "C:/ELK/Data_Landing/*.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}

filter {
  csv {
    separator => ","
    skip_header => true
    skip_empty_columns => true
    autogenerate_column_names => false
    #31-01-2019 10:13
    columns => [ "Severity", "AlertReceived", "Node", "Application", "MessageGroup", "Object", "TemplateName", "ConditionMatched", "MessageText", "OpsAck" ]
  }

  #if [Node] =~ /^"tm"*/ or [Node] =~ /^"tq"*/ or [Node] =~ /^"bp"*/ or [Node] =~ /^"ob"*/ or [Node] =~ /^"le"*/
  #{
  #  mutate { add_field => "ProductType" => "EXXXXX"}
  #}
  #else if [Node] =~ /^dv*/ AND [Application] != "XXXX"
  #{
  #  mutate { add_field => "ProductType" => "SXXX"}
  #}
  #else
  #{
  #  mutate { add_tag => "Undefined"}
  #}

  if [Node] == "X-XXXX-XXXX.XX.XXXX.XXXX.local" {
    mutate { add_field => { "ProductType" => "SXXX" } }
  }

  mutate {
    gsub => ["AlertReceived", "/", "-"]
    #gsub => ["AlertReceived", " ", ";"]
  }

  date {
    match => ["AlertReceived", "dd-MM-yyyy HH:mm:ss"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "ito"
    document_type => "csv"
  }
  stdout { codec => rubydebug }
}
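As an aside, the commented-out block has syntax problems independent of the regexps: in Logstash, add_field takes a hash, and the boolean operator must be lowercase "and", not "AND". A sketch of how that logic could be written with valid syntax (the placeholder values are from the post, and the unquoted prefix regexps are my guess at the intent):

```
filter {
  if [Node] =~ /^tm/ or [Node] =~ /^tq/ or [Node] =~ /^bp/ or [Node] =~ /^ob/ or [Node] =~ /^le/ {
    mutate { add_field => { "ProductType" => "EXXXXX" } }
  } else if [Node] =~ /^dv/ and [Application] != "XXXX" {
    mutate { add_field => { "ProductType" => "SXXX" } }
  } else {
    mutate { add_tag => [ "Undefined" ] }
  }
}
```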

Sample Messages

It would help to have sample data.

I usually debug this sort of issue using the generator input.

input {
  generator {
    lines => [
      "stop,wait for it,go go go - man",
      "halt,hesitate,hurry along"
    ]
    count => 1
  }
}

filter {
  csv {
    separator => ","
    columns => ["red","amber","green"]
  }
  if [red] == "halt" {
    mutate {
      add_field => { "[intonation]" => "posh"}
    }
  } else {
    mutate {
      add_field => { "[intonation]" => "hipster"}
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Gives

{
         "green" => "hurry along",
       "message" => "halt,hesitate,hurry along",
           "red" => "halt",
      "sequence" => 0,
      "@version" => "1",
          "host" => "Elastics-MacBook-Pro.local",
    "@timestamp" => 2019-04-11T13:10:21.913Z,
         "amber" => "hesitate",
    "intonation" => "posh"
}
{
         "green" => "go go go - man",
       "message" => "stop,wait for it,go go go - man",
           "red" => "stop",
      "sequence" => 0,
      "@version" => "1",
          "host" => "Elastics-MacBook-Pro.local",
    "@timestamp" => 2019-04-11T13:10:21.896Z,
         "amber" => "wait for it",
    "intonation" => "hipster"
}

It makes my head explode that that works the way it does in a Logstash conditional. I know Oniguruma, ed, awk, perl, and csh all have different regexp syntaxes, but I thought I understood Logstash. Apparently not.

I'm not sure it works as expected. The * asserts zero or more " characters, so the pattern is effectively /^"tm/; since the field value does not start with a literal double quote, it will never match.
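A quick check in plain Ruby (the regexp flavour Logstash conditionals use) shows why, using a hypothetical Node value:

```ruby
# Pattern from the original conditional: a literal double quote, "tm",
# then zero or more double quotes -- effectively /^"tm/.
quoted = /^"tm"*/
# What was probably intended: Node starts with "tm".
plain  = /^tm/

node = "tm-host-01"  # hypothetical Node value; no surrounding quotes

puts (node =~ quoted).inspect  # nil -- the value has no literal " at the start
puts (node =~ plain).inspect   # 0 -- matches at position 0
```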

@Dilruwan_Madubashi10

The == equality operator will fail if there is leading or trailing whitespace.
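If stray whitespace is the problem, stripping the field before the comparison is one way to rule that out; a minimal sketch using the mutate filter's strip option:

```
filter {
  # Remove leading and trailing whitespace before any comparisons
  mutate { strip => [ "Node" ] }

  if [Node] == "X-XXXX-XXXX.XX.XXXX.XXXX.local" {
    mutate { add_field => { "ProductType" => "SXXX" } }
  }
}
```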

For some reason I read the * as +. Head now un-exploded.

Severity   Alert Received        Node
normal     28/02/2019-23:59:38   paXXXXXX
minor      28/02/2019-23:59:20   sp-XXX-XXX.columbus.stockex.com
minor      28/02/2019-23:59:16   paXXXXXX
normal     28/02/2019-23:59:16   paXXXXXX
minor      28/02/2019-23:59:13   paXXXXXX
normal     28/02/2019-23:59:11   paXXXXXX
normal     28/02/2019-23:59:11   paXXXXXX
normal     28/02/2019-23:59:02   paXXXXXX
normal     28/02/2019-23:56:18   op-XXXX-CCC01.XXX-ops.abc.com
major      28/02/2019-23:56:12   dv-XXXX-CCC01.XXX-ops.abc.com
major      28/02/2019-23:55:03   op--XXXX-CCC01.XXX-ops.abc.com

Sample data in the CSV would look like this. I have obfuscated some values, as these come from a production environment. :slight_smile:
