Trouble matching timestamp

I'm having trouble getting the ISO8601 pattern to match the timestamp in the date filter.

Below are my filter and output configs. I have tried each of the commented-out lines for match, but none of them work.

filter {
  if "CollectorStatus" == [type] {
    date {
      #match => ["system_date","yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"]
      #match => ["system_date","ISO8601"]
      #timezone => "UTC"
      target => "@timestamp"
    }
  }
}
output {
  file {
    path => "/tmp/logstash/stdout_%{+YYYY.MM.dd}"
    codec => "json_lines"
    flush_interval => 0
  }
}

This is what my output looks like.

{"collection_date":"2017-04-26T19:45:00.000Z","system_date":"2017-04-26T20:13:00.000Z","ne_name":"YXCO41_FELDZ77-01","ne_model":"Z77","finished":1,"type":"CollectorStatus","collector_interval":900,"active_hdl":"19b","tags":["CollectorStatus","Z77","_dateparsefailure"],"@timestamp":"2017-04-26T20:15:01.777Z","abort":0,"collector_name":"YXCO41_FELDZ77-01","@version":"1","rpu_hostname":"sapl19"}

It is adding the tag "_dateparsefailure".
Based on the JSON output, I don't think my "system_date" field is nested, so it should be working.
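If the field were nested, I'd expect to need Logstash's square-bracket field reference syntax instead; a hypothetical sketch ([parent] is not a real field in my events):

date {
  # only needed if system_date were nested, e.g. {"parent":{"system_date":...}}
  match => ["[parent][system_date]", "ISO8601"]
}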

Any suggestions?

Yeah, using the ISO8601 pattern should definitely work. Check the Logstash log for details about the date parsing failure.

Is there anything special I need to put into the conf or logstash startup to get more debug output from the date parser?

I've tried running Logstash with log.level=trace, but it still isn't helping me. I also changed the output to codec => "rubydebug", and that isn't helping either.

Sample line from /var/log/logstash-plain.log

[2017-04-27T16:30:02,512][DEBUG][logstash.pipeline ] output received {"event"=>{"collection_date"=>2017-04-27T20:00:00.000Z, "system_date"=>2017-04-27T20:28:00.000Z, "ne_name"=>"YXCO41_FELDZ77-01", "ne_model"=>"Z77", "finished"=>1, "type"=>"CollectorStatus", "collector_interval"=>900, "active_hdl"=>"19b", "tags"=>["NetOptimizeCollectorStatus", "cyan", "_dateparsefailure"], "collection_status"=>"Finished", "@timestamp"=>2017-04-27T20:30:02.502Z, "abort"=>0, "collector_name"=>"YXCO41_FELDZ77-01", "@version"=>"1", "rpu_hostname"=>"sapl19", "ne_vendor"=>"Cyan"}}

Sample event from the rubydebug output.

{
    "collection_date" => 2017-04-27T20:45:00.000Z,
    "system_date" => 2017-04-27T21:12:00.000Z,
    "ne_name" => "TRHLPAXTO02",
    "ne_model" => "cyan-Z33",
    "finished" => 1,
    "type" => "CollectorStatus",
    "collector_interval" => 900,
    "active_hdl" => "20a",
    "tags" => [
        [0] "NetOptimizeCollectorStatus",
        [1] "cyan",
        [2] "_dateparsefailure"
    ],
    "collection_status" => "Finished",
    "@timestamp" => 2017-04-27T21:15:01.796Z,
    "abort" => 0,
    "collector_name" => "TRHLPAXTO02",
    "@version" => "1",
    "rpu_hostname" => "sapl20",
    "ne_vendor" => "Cyan"
}

I also searched previous posts and found the online tester site https://joda-time-parse-debugger.herokuapp.com/
My date parses with the pattern just fine there.

I'm using the JDBC input for Logstash, so could it be that the "system_date" field isn't reaching the date parser as expected?

I had multiple conf files before, but condensed them to just one while trying to solve this. Below is now my ONLY conf file for Logstash. (Note: I altered some JDBC fields since this is a public forum.)

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@//fwsd01:1531/PROD"
    jdbc_driver_library => "/usr/share/logstash/jdbc_drivers/oracle-jdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_password => "xxxxxxxx"
    jdbc_user => "xxxxxxxx"
    jdbc_validate_connection => "true"
    #parameters => { "table" => "timezone" }
    schedule => "*/15 * * * *"
    #sql_log_level => "debug"
    statement_filepath => "/usr/share/logstash/DB_queries/c_status.sql"
    record_last_run => "false"
    tags => ["valid","NetOptimizeCollectorStatus","cyan"]
    type => "CollectorStatus"
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.
filter {
  if "CollectorStatus" == [type] {
    date {
      match => ["system_date","YYYY-MM-dd'T'HH:mm:ss.SSS'Z'","ISO8601"]
      timezone => "Etc/UTC"
      target => "@timestamp"
    }
  }
  if "valid" not in [tags] {
    drop { }
  }
  mutate {
    remove_tag => [ "valid" ]
  }
}

output {
  if "CollectorStatus" == [type] {
    elasticsearch {
      hosts => ["10.112.91.113:9200"]
      id => "collector_status"
      index => "collector_status_%{+YYYY.MM.dd}"
    }
  }
  file {
    path => "/tmp/logstash/stdout_%{+YYYY.MM.dd}"
    codec => "rubydebug"
    flush_interval => 0
  }
}

Is there anything special I need to put into the conf or logstash startup to get more debug output from the date parser?

No, it logs that at the default log level.

"system_date" => 2017-04-27T21:12:00.000Z,

Aha! The field isn't a string; it's already a timestamp, and that's why the date filter fails. That's arguably a bug (I've filed Date filter fails to parse timestamps · Issue #95 · logstash-plugins/logstash-filter-date · GitHub), but there are a few workarounds that you can try:

  • In your SQL query, typecast the timestamp to a string.
  • Use a mutate filter's convert option to typecast the field to a string prior to the date filter (see the sketch after this list).
  • Use a mutate filter to copy the timestamp into @timestamp, overwriting the existing value (use the replace option).
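For the second workaround, a minimal sketch of the filter chain (assuming the JDBC input delivers system_date as a timestamp object; convert renders it via to_s, which yields an ISO8601 string):

filter {
  # typecast the timestamp object to its ISO8601 string form
  mutate {
    convert => { "system_date" => "string" }
  }
  # the date filter can now parse the string as usual
  date {
    match => ["system_date", "ISO8601"]
  }
}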

Thanks very much magnus,
Based on your recommendations, here is my final solution. I'm posting it in case anyone else runs into a similar issue.

I changed my filter (based on my type) to the following. Adding a field with mutate creates a string copy of the timestamp, which I then use in the date filter's match; the temporary field is removed once the match succeeds. Everything works great.

if "CollectorStatus" == [type] {
mutate {
add_field => {"temp_ts" => "%{system_date}"}
}
date {
match => ["temp_ts","ISO8601"]
remove_field => ["temp_ts"]
}
}
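For anyone reusing this: add_field renders the %{system_date} sprintf reference as a plain string, which is what gives the date filter something it can parse, and remove_field inside the date filter only runs when the match succeeds, so events that still fail to parse keep temp_ts around for debugging.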
