Hello everyone. I'm currently at the stage of writing custom grok filters in Logstash. I have many different logs, and some of them have a date that fits the ISO8601 format. Unfortunately, when I use that pattern in my first grok filter, Logstash sometimes parses the date not from the beginning of the line but from somewhere inside the message field (for example 11-02-1969, or whatever else happens to fit the format).
To have full control over which date I extract from the log, I'm building the match from the ready-made patterns such as %{YEAR} (and day, time, etc.), plus custom-defined patterns like (?<field_name>\w{24}\s).
The problem is that the fields I create this way are not sortable in Kibana: they are text, not date. Is there any way to make those fields a date type, so they can be recognized and sorted in a Kibana column similar to @timestamp?
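For example, one of these filters looks roughly like this (the field names are only placeholders, and the exact layout of course depends on the log):
grok {
  # "^" anchors the pattern to the start of the message, so grok does not
  # pick up a date that happens to appear later in the text
  match => { "message" => "^%{YEAR:y}-%{MONTHNUM:m}-%{MONTHDAY:d}\s%{TIME:t}\s(?<field_name>\w{24}\s)" }
}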
Or does anybody have other ideas?
Regards
Sorry.
grok {
  # sample line: [11/13/23 11:12:23:923 CET] some text ...
  match => { "message" => "^\[%{MONTHNUM:m}\s*/%{MONTHDAY:d}\s*/%{YEAR:y}\s*%{TIME:t}\s*%{WORD:tz}\]" }
  # builds e.g. "13/11/23 11:12:23:923 CET"
  add_field => { "real.timestamp" => "%{d}/%{m}/%{y} %{t} %{tz}" }
}
How can I convert real.timestamp to a date-type field?
When I use
grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:real.timestamp}" }
}
For example, in this message: [11/13/23 11:12:23:923 CET] some text value=1977-11-21 01:22:11.0
it takes the date not from the beginning of the line but from inside the message (1977-11-21 01:22:11.0).
But how can I do that? I need to keep, for example, the format day/month/year hour:minutes:seconds:milliseconds.
My logs come from many systems, and I need to unify the date format in Kibana across the whole system.
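Just to illustrate what I mean: could something along these lines work, where a single date filter accepts the different source formats? (The field name log_ts is only a placeholder for wherever the raw timestamp string ends up.)
date {
  # try each known source format in turn; the first one that matches wins
  match    => [ "log_ts", "MM/dd/yy HH:mm:ss:SSS", "ISO8601" ]
  timezone => "CET"
  target   => "[real][timestamp]"
}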
If you want [real][timestamp] to be a date type, you have to convert it with a date filter. You also have to delete the data view and the index (or reindex) so that the new mapping takes effect. By the way, I have corrected the time format to HH:mm:ss:SSS, since your milliseconds are separated by a colon rather than a dot.
date {
  # "date" is whatever field holds the raw timestamp string captured by grok,
  # e.g. "11/13/23 11:12:23:923"
  match    => [ "date", "MM/dd/yy HH:mm:ss:SSS" ]
  timezone => "CET" # or "%{tz}"
  target   => "[real][timestamp]"
}
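For completeness, a minimal end-to-end sketch of the idea, assuming the bracketed timestamp always sits at the start of the line (log_ts and tz are just example field names):
grok {
  # capture the leading "[11/13/23 11:12:23:923 CET]" block into two fields
  match => { "message" => "^\[(?<log_ts>%{MONTHNUM}/%{MONTHDAY}/%{YEAR}\s%{TIME})\s%{WORD:tz}\]" }
}
date {
  # parse the string into a real date stored under [real][timestamp];
  # Kibana can then sort and filter on it just like @timestamp
  match    => [ "log_ts", "MM/dd/yy HH:mm:ss:SSS" ]
  timezone => "CET"
  target   => "[real][timestamp]"
}
Once the field is mapped as date, Kibana sorts it like any other date column, and a dd/MM/yy display format can be set per field in the data view instead of being baked into the string.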