I have the following filter in my logstash pipeline:
date {
    match => [ "[fields][serverrequest][starttime]", "UNIX_MS" ]
    tag_on_failure => [ "serverrequest_logstyle_dateparsefailure" ]
}
However, the resulting document in Elasticsearch shows a @timestamp value that does not match the parsed field, e.g. this document:
@timestamp: Jan 27, 2021 @ 11:11:14.891
fields.serverrequest.starttime: 1611722342306
Parsing that starttime on epochconverter.com gives me Wednesday, 27 January 2021 04:39:02.306, which is what I expect from the other document details; even the last 3 digits of the starttime field (the milliseconds, .306) don't match the @timestamp's .891.
I can't see any backlog in the Filebeat / Logstash monitoring that would explain a roughly 7-hour delay, and I don't see any documents with a timestamp in the future.
A grok pattern like %{NUMBER:fields.serverrequest.starttime} creates a single field literally called fields.serverrequest.starttime, with two periods in the name. If you want a starttime field inside a serverrequest object inside a fields object, you have to use %{NUMBER:[fields][serverrequest][starttime]} instead.
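As a sketch of the difference (the original grok isn't shown here, so the message field and the NUMBER capture are assumptions):

# Hypothetical grok using a dotted name: this creates one field
# literally named "fields.serverrequest.starttime".
grok {
    match => { "message" => "%{NUMBER:fields.serverrequest.starttime}" }
}

# Using Logstash field-reference syntax instead creates the nested field
# [fields][serverrequest][starttime] that the date filter above can find.
grok {
    match => { "message" => "%{NUMBER:[fields][serverrequest][starttime]}" }
}

Note that when the source field does not exist, the date filter does nothing and adds no failure tag, which is why you see no serverrequest_logstyle_dateparsefailure and why @timestamp falls back to the time Logstash created the event rather than the parsed start time.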
Kibana and Elasticsearch display the same dotted name for both things (a field whose name literally contains periods and a field nested inside objects); Logstash does not. There was a time (Elasticsearch 2.x) when Elasticsearch disallowed periods in field names, but they were allowed again later, from 5.0 on.
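For illustration, here is roughly how the two events would look inside Logstash (a sketch of stdout { codec => rubydebug } output, using the starttime value from your document), even though Kibana would render both field names identically:

# Dotted name: one top-level key that happens to contain periods.
{
    "fields.serverrequest.starttime" => "1611722342306"
}

# Field-reference syntax: nested objects, which is what
# [fields][serverrequest][starttime] addresses.
{
    "fields" => {
        "serverrequest" => {
            "starttime" => "1611722342306"
        }
    }
}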