I want to use the timestamp found in the log message itself as the event timestamp. Why do I keep getting a _dateparsefailure? The grok pattern works, and logTimestamp shows up in Kibana as a string.
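For reference, the overall shape of what I have is roughly this (a sketch rather than my literal config; the grok pattern and the date match formats shown here are stand-ins):

filter {
  grok {
    # Pull the leading syslog-style timestamp out of the line into logTimestamp.
    match => { "message" => "%{SYSLOGTIMESTAMP:logTimestamp} %{GREEDYDATA:rest}" }
  }
  date {
    # Parse logTimestamp into @timestamp; syslog pads single-digit days with a space,
    # hence the second pattern.
    match => ["logTimestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss"]
  }
}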
Hi. I've tried this as well, but I'm still seeing the issue where the match picks up the Time attribute rather than the log's own timestamp. My current config looks like this:
input {
  kafka {
    bootstrap_servers => "kafka02.company.net:9093"
    topics => ["Capsule_logs"]
  }
}
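One thing I notice against the debug output below: the event's message field still contains the whole Filebeat JSON document, which suggests the kafka input is handing over raw strings. An input with a json codec, roughly like this (a sketch, everything else unchanged), would decode that document before the grok and date filters run:

input {
  kafka {
    bootstrap_servers => "kafka02.company.net:9093"
    topics => ["Capsule_logs"]
    # Decode the Filebeat JSON payload instead of leaving it as a raw string in "message".
    codec => "json"
  }
}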
With Logstash logging set to debug, what I see in the logs is:
[2018-02-13T00:16:11,475][DEBUG][logstash.pipeline ] output received {"event"=>{"tags"=>["_dateparsefailure"], "message"=>"{"@timestamp":"2018-02-13T00:16:07.784Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.0","topic":"Capsule_logs"},"source":"/data_0/logs/company.com/postgresql463/postgresql463.log","offset":8754897,"message":"Feb 12 23:25:22 company.com postgresql463: postgres.24 | Updating the TTL for primary.","tags":["postgresql"],"prospector":{"type":"log"},"beat":{"name":"syslog.internal","hostname":"syslog.internal","version":"6.1.0"}}", "@version"=>"1", "@timestamp"=>2018-02-13T00:16:10.933Z, "logTimestamp"=>"Feb 12 23:25:22"}}
So logTimestamp is being set, but the event is still being tagged with _dateparsefailure. I don't see anything in the logs explaining why it's happening, only that it is.
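One thing I still plan to try (an assumption on my part, nothing in the logs confirms it): month-abbreviation parsing in the date filter is locale-sensitive, so pinning the locale explicitly, roughly like this:

filter {
  date {
    # "MMM" month names are locale-sensitive; pin to English so "Feb" parses
    # regardless of the JVM's default locale.
    match => ["logTimestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss"]
    locale => "en"
  }
}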