Overwrite @timestamp with log timestamp

As the title says, I'm looking to overwrite the ingest timestamp (@timestamp) with the actual timestamp pulled from the log data. I already have a custom regex that extracts this timestamp from my logs as its own field, but I want to be able to sort logs by that timestamp in Kibana.

filter {
  if [fields][index] == "mylogs" {
    mutate {
      add_field => { "ingest_timestamp" => "%{@timestamp}" }
    }
    grok {
      pattern_definitions => { "MYLOG_TIMESTAMP" => "\[([0-9]{2}.){6}[0-9]{3}.[0-9]{2}\]" }
      break_on_match => false
      match => [ "message", "%{MYLOG_TIMESTAMP:timestamp} %{GREEDYDATA:Message}" ]
    }
    date {
      match => [ "timestamp", "dd/MM/yy HH:mm:ss.SSS" ]
    }
  }
}

Now, from my research I feel I should just replace "timestamp" with [@metadata][timestamp] to overwrite @timestamp, but it doesn't work. Instead, the timestamp displayed in Kibana is still the time the logs were ingested. The grok filter above works fine, in that it correctly extracts the log's timestamp via the pattern_definitions field.
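Concretely, this is roughly the variant I tried (a sketch using my field names, on the assumption that date writes a successful match to @timestamp by default):

grok {
  pattern_definitions => { "MYLOG_TIMESTAMP" => "\[([0-9]{2}.){6}[0-9]{3}.[0-9]{2}\]" }
  match => [ "message", "%{MYLOG_TIMESTAMP:[@metadata][timestamp]} %{GREEDYDATA:Message}" ]
}
date {
  # no target set, so a successful match should overwrite @timestamp
  match => [ "[@metadata][timestamp]", "dd/MM/yy HH:mm:ss.SSS" ]
}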

Any help would be greatly appreciated.

There is no way that date filter can match whatever that grok pattern matches.

The grok pattern matches

  1. Left square bracket
  2. Six occurrences of two digits followed by any character
  3. Three digits
  4. One occurrence of any character
  5. Two digits
  6. A right square bracket

Your date pattern would match items 2 and 3 from that list, but not the rest. The pattern must match the entire contents of the field.
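One way around that (a sketch, not tested against your data) is to strip the pieces the date filter cannot use before matching, i.e. the leading bracket and the trailing dot, two digits, and closing bracket:

mutate {
  # assumes [timestamp] holds the full bracketed match from your grok pattern
  gsub => [
    "timestamp", "^\\[", "",
    "timestamp", "\\.[0-9]{2}\\]$", ""
  ]
}
date {
  # the remaining value can now match the entire contents of the field
  match => [ "timestamp", "dd/MM/yy HH:mm:ss.SSS" ]
}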

Can you tell me where this breaks down?

For example, my grok pattern will extract the following timestamp from the original logs:
[10/06/21 09:33:32.777.00]

Is it the square brackets that cause the date match to break down? Trying to find documentation specifically on the date filter's match and target options is leading me down many dead-end rabbit holes. Any help is appreciated.

The date pattern has to match the entire contents of the field. What does the .00 at the end of the date mean?

As far as I can tell, it's just extra sub-millisecond precision. The finest resolution the date filter can work with is thousandths of a second. From the documentation, my understanding is that it should still match and would just append zeros.

This is a terrible timestamp to have to work with, coming from an old service, but I've no control over the format unfortunately. Ultimately, I just want to be able to sort data in Kibana by the timestamp associated with a log entry, not the ingest time at which the logs were parsed by Filebeat / Logstash, which is the default.

I would suggest match => [ "timestamp", "[MM/dd/yy HH:mm:ss.SSS.SS]" ]
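That is, keeping the bracket-wrapped value your grok pattern produces, something like this (untested; if the day comes before the month in your logs, use dd/MM/yy instead):

date {
  # square brackets and dots are treated as literal text in the pattern,
  # so it can match the entire contents of the field
  match => [ "timestamp", "[MM/dd/yy HH:mm:ss.SSS.SS]" ]
  # @timestamp is the default target, so no target option is needed
}

If it still fails, the filter will tag the event with _dateparsefailure, which is the quickest way to confirm whether the pattern is even being applied.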
