Date Filter not working with IST

Hi,

I'm using Logstash 6.5.4. Below are my sample log lines and my filter configuration -
Sample log -
2019-01-12 13:10:38 IST,9898989898,HHH-444
2019-01-12 13:11:38 IST,9898989897,HHH-555

filter {
        if "test111" in [log_type] {
                csv {
                        columns => ["time","Msisdn","segment"]
                }
                date {
                        match => [ "time", "yyyy-MM-dd HH:mm:ss ZZZ" ]
                        target => "log_timestamp"
                        remove_field => "time"
                }
        }
}

Instead of ZZZ in match I have tried 'z' and 'Z', and also left it out entirely. The index appears in Kibana, but the only option in the Time Filter field name dropdown is @timestamp.

What am I missing to see "log_timestamp" in Time Filter field name?
Please suggest. Thanks

Hello @Harsh_Sharma,

Please add

timezone => "Asia/Kolkata"

after the target option.

Regards
Shrikant

Hi @shrikantgulia

I had already tried that, and I checked again just now, but I get the same result.

filter {
        if "test111" in [log_type] {
                csv {
                        columns => ["time","Msisdn","segment"]
                }
                date {
                        match => [ "time", "yyyy-MM-dd HH:mm:ss ZZZ" ]
                        target => "log_timestamp"
                        timezone => "Asia/Kolkata"
                        remove_field => "time"
                }
        }
}

In the Time Filter field I am still only getting the default @timestamp option.
I'm struggling with how to handle this 'IST' part.
Sample log -
2019-01-12 13:10:38 IST,9898989898,HHH-444

@Harsh_Sharma @shrikantgulia

Please confirm that this works.

It is my understanding that IST is one of the ambiguous abbreviations, so the Joda-Time Java library that we use behind the scenes can't clearly determine which offset to apply. See https://www.timeanddate.com/time/zones/

In the past, I have advised people to replace the IST with Asia/Kolkata before the date filter attempts to convert the string.

input {
  generator {
    lines => ['2019-01-12 13:10:38 IST,9898989898,HHH-444']
    count => 1
  }
}

filter {
  csv{
    columns => ["time","Msisdn","segment"]
  }
  mutate {
    gsub => ["[time]", "IST$", "Asia/Kolkata"]
  }
  date {
    match => [ "time", "yyyy-MM-dd HH:mm:ss ZZZ", "yyyy-MM-dd HH:mm:ss Z"]
    target => "log_timestamp"
  }
}

output {
  stdout { codec => rubydebug }
}

Gives:

{
          "time" => "2019-01-12 13:10:38 Asia/Kolkata",
      "sequence" => 0,
       "message" => "2019-01-12 13:10:38 IST,9898989898,HHH-444",
          "host" => "Elastics-MacBook-Pro.local",
       "segment" => "HHH-444",
      "@version" => "1",
        "Msisdn" => "9898989898",
 "log_timestamp" => 2019-01-12T07:40:38.000Z,
    "@timestamp" => 2019-01-22T11:46:56.321Z
}

FYI https://github.com/logstash-plugins/logstash-filter-date/issues/128

Hi @guyboertje

Thanks for the update. I finally got it working after making a small change.

csv {
        columns => ["timelog","Msisdn","segment"]
}
mutate {
        gsub => [
                "timelog", " IST$", ""
        ]
}
date {
        match => [ "timelog", "yyyy-MM-dd HH:mm:ss" ]
        target => "log_timestamp"
        remove_field => ["timelog"]
}

OK, that works too, but you should now set the timezone option in the date filter, since the time portion is definitely not UTC.
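For clarity, here is a sketch of that combined filter with the timezone option added (the field names and the log_type condition are taken from the configs earlier in this thread):

filter {
        if "test111" in [log_type] {
                csv {
                        columns => ["timelog","Msisdn","segment"]
                }
                mutate {
                        # strip the ambiguous abbreviation; the offset is supplied to the date filter below
                        gsub => ["timelog", " IST$", ""]
                }
                date {
                        match => [ "timelog", "yyyy-MM-dd HH:mm:ss" ]
                        target => "log_timestamp"
                        timezone => "Asia/Kolkata"
                        remove_field => ["timelog"]
                }
        }
}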

(For future readers) You can remove the TZ abbreviation only if all your timestamp strings are expressed in one timezone. If you get a mix of TZ abbreviations (as one might get in global centralised logging) then removing the TZ abbreviation is problematic.
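If you do get a mix of abbreviations, one possible sketch is to map each abbreviation to an explicit zone name with gsub before parsing, and keep the ZZZ pattern in the date match. Note that the zone each abbreviation maps to (e.g. CST, BST below) is itself ambiguous, so these mappings are assumptions you must make per deployment:

mutate {
        gsub => [
                # assumed mappings -- adjust to what these abbreviations actually mean in your logs
                "[time]", "IST$", "Asia/Kolkata",
                "[time]", "CST$", "America/Chicago",
                "[time]", "BST$", "Europe/London"
        ]
}
date {
        match => [ "time", "yyyy-MM-dd HH:mm:ss ZZZ" ]
        target => "log_timestamp"
}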
