Date filter is setting an incorrect timestamp / not actually setting the timestamp

I have the following filter in my logstash pipeline:

    date {
      match => [ "[fields][serverrequest][starttime]", "UNIX_MS" ]
      tag_on_failure => [ "serverrequest_logstyle_dateparsefailure" ]
    }

However, the resulting document in Elasticsearch shows a timestamp value that does not match the parsed field. For example:

    @timestamp: Jan 27, 2021 @ 11:11:14.891
    fields.serverrequest.starttime: 1611722342306

Parsing that starttime on epochconverter.com gives Wednesday, 27 January 2021 04:39:02.306, which is what I expect from the other document details, and also from simply looking at the last three digits of the starttime field (.306, versus the .891 milliseconds shown in @timestamp).

I can't see any backlog in the Filebeat/Logstash monitoring that would explain a ~7 hour delay, and I don't see any documents with a timestamp in the future.

I'm at a loss to explain what's going on.
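
I guess my next step is to add a temporary stdout output with the rubydebug codec so I can see what the event structure actually looks like when it leaves the pipeline, something like:

    output {
      # Temporary debug output: prints each event as a Ruby-style hash, which
      # shows whether a value lives in a nested object or in a single flat
      # field whose name happens to contain periods.
      stdout { codec => rubydebug }
    }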

Gah, foolishly I was using the wrong syntax to access a nested field. I should have used "fields.serverrequest.starttime".

No, you used the right syntax to access a nested field. My guess is that what you actually have is a field with periods in its name, which is very different. That would also explain the timestamp: if [fields][serverrequest][starttime] never existed on the event, the date filter had nothing to parse, so @timestamp kept its default value, the time the event entered the pipeline.
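
To illustrate, here is a hypothetical rubydebug-style rendering of the two cases (using the value from your example):

    # Nested structure: what [fields][serverrequest][starttime] refers to.
    {
        "fields" => {
            "serverrequest" => {
                "starttime" => "1611722342306"
            }
        }
    }

    # One flat field whose name contains periods: what a pattern like
    # %{NUMBER:fields.serverrequest.starttime} would create.
    {
        "fields.serverrequest.starttime" => "1611722342306"
    }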

Huh... Have I done something silly in my grok, then, and used field names with periods instead of nested fields? Here's the full section:

    grok {
      match => { "message" => "^%{NUMBER:fields.serverrequest.starttime}\|%{NUMBER:fields.serverrequest.responsetime:int}\|%{NUMBER:fields.serverrequest.sqltime:int}\|%{NUMBER:fields.serverrequest.querycount:int}\|%{NUMBER:fields.sql.numberofrowsread:int}\|%{NUMBER:fields.sql.resultsetopentime:int}\|%{NUMBER:fields.serverrequest.beginfreememory}\|%{NUMBER:fields.serverrequest.endfreememory}\|%{NUMBER:fields.serverrequest.totalmemory}\|%{NUMBER:fields.serverrequest.wftime:int}\|%{NUMBER:fields.serverrequest.events:int}\|%{NUMBER:fields.serverrequest.usedjdbcconnections:int}\|%{NUMBER:fields.serverrequest.available:int}\|(%{IP:fields.client.ip})?\|(%{DATA:fields.client.username}/%{DATA:fields.client.process})?\|%{NUMBER:fields.requestid:int}\|(?<fields.servername>.*?)\|%{GREEDYDATA:fields.methodcall}$"}
      tag_on_failure => [ "serverrequest_logstyle_grokparsefailure" ]
    }

    mutate {
      remove_field => [ "message" ]
      convert => { "fields.serverrequest.executetime" => "integer" }
    }

    date {
      match => [ "fields.serverrequest.starttime", "UNIX_MS" ]
      tag_on_failure => [ "serverrequest_logstyle_dateparsefailure" ]
    }

That creates a field called fields.serverrequest.starttime, with two periods in the name. If you want a starttime field inside the serverrequest object inside the fields object, you would have to use %{NUMBER:[fields][serverrequest][starttime]}.
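
For example, here is a minimal sketch of that fix, abbreviated to the first two captures plus a hypothetical [fields][rest] catch-all standing in for the remaining ones (they all follow the same bracket form; this also assumes a grok filter version recent enough to accept bracket references in captures):

    grok {
      # Bracket syntax stores each value in a nested object rather than in a
      # single flat field whose name contains periods.
      match => { "message" => "^%{NUMBER:[fields][serverrequest][starttime]}\|%{NUMBER:[fields][serverrequest][responsetime]:int}\|%{GREEDYDATA:[fields][rest]}" }
      tag_on_failure => [ "serverrequest_logstyle_grokparsefailure" ]
    }

    date {
      # This reference now resolves, because the nested field actually exists.
      match => [ "[fields][serverrequest][starttime]", "UNIX_MS" ]
      tag_on_failure => [ "serverrequest_logstyle_dateparsefailure" ]
    }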

Kibana and Elasticsearch use the same name for both things (a nested field and a flat field with periods both appear as fields.serverrequest.starttime); Logstash does not. There was a time (around 5.x, if I recall correctly) when Elasticsearch disallowed periods in field names, but they were later allowed again.

Understood, thanks very much for clarifying.
