If the parsed data contains a @timestamp field, we will try to use it for the event's @timestamp. If the parsing fails, the field will be renamed to _@timestamp and the event will be tagged with a _timestampparsefailure.
...but I can't find any info on how they try to parse it or how to affect the parsing.
I'm feeding in a record with @timestamp set to a valid epoch time - and it's failing to parse it:
"@timestamp":"1522458058"
Now, it's about 3 months old (old test data), but it is the correct timestamp for the event - 2018-03-31 01:00:58 AM - so why is it getting rejected?
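Converting the epoch seconds by hand confirms the value really does correspond to that time (in UTC):

```python
from datetime import datetime, timezone

# The raw @timestamp value from the record (epoch seconds as a string)
raw = "1522458058"

# Convert to a UTC datetime to confirm it is a sane event time
event_time = datetime.fromtimestamp(int(raw), tz=timezone.utc)
print(event_time.isoformat())  # 2018-03-31T01:00:58+00:00
```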
Is there any way of disabling the timestamp parsing? I'm trying to feed Logstash data it can simply ingest, without it needing to mess around with each record after receiving it.
Does it work if you parse _@timestamp with a date filter afterwards? I don't know whether the json filter is particularly clever at detecting date formats.
Something like this should work:
if "_timestampparsefailure" in [tags] {
  date {
    match => [ "_@timestamp", "UNIX" ]
    remove_field => "_@timestamp"
    remove_tag => "_timestampparsefailure"
  }
}
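For context, here is a minimal pipeline sketch showing where that fallback would sit, assuming the JSON document arrives in the message field (the _@timestamp field and _timestampparsefailure tag are the ones described in the json filter behaviour quoted above):

```
filter {
  # Parse the JSON payload; on a bad @timestamp, the filter renames the
  # field to _@timestamp and tags the event with _timestampparsefailure
  json {
    source => "message"
  }

  # Fall back to parsing the epoch value ourselves
  if "_timestampparsefailure" in [tags] {
    date {
      match => [ "_@timestamp", "UNIX" ]
      remove_field => "_@timestamp"
      remove_tag => "_timestampparsefailure"
    }
  }
}
```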
If the question is just about disabling the parsing entirely (without caring about setting the correct timestamp for the event), I don't know of a way, and I don't see one in the code.