Hi,
I'm encountering an issue with the Logstash json plugin.
In my "message" there's a field called "@timestamp" in UNIX epoch float format. However, when I use the json filter to parse the "message" field, it moves that field to _@timestamp, with its value as a scientific-notation string.
Is there a date match format I can use out of the box to parse the scientific notation, instead of writing my own ruby code to recover the original float timestamp?
Also, is there a way to suppress the overwhelming WARN logs from the json filter, such as:
Unrecognized @timestamp value, setting current time to @timestamp, original in _@timestamp field {:value=>"0.1592150846569201e10"}
I haven't tried this, but based on this it looks like the UNIX format will parse a float epoch.
You can set the json filter's target to another field, then use a date filter to convert the timestamp. That way the filter won't try to overwrite @timestamp for every entry, avoiding the warning message. But depending on the number of fields in the JSON, you could end up with a lot of fields under the target.
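Untested sketch of what I mean; the target name "parsed" is just an example:
json {
  source => "message"
  # park the parsed document under its own field so the embedded
  # @timestamp cannot collide with the event's @timestamp
  target => "parsed"
  remove_field => ["message"]
}
date {
  # under a target the embedded @timestamp stays a plain float,
  # which the UNIX format should accept
  match => ["[parsed][@timestamp]", "UNIX"]
}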
Thank you very much for your prompt reply. #1: I tried the following to parse the scientific-notation string, but I got a _dateparsefailure.
date {
  match => ["_@timestamp", "UNIX"]
}
It seems that it's good at parsing a UNIX float, but NOT the scientific-notation format.
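If nothing out of the box handles it, I suppose the fallback is a small ruby filter; untested sketch, assuming _@timestamp holds the scientific-notation string:
ruby {
  # Ruby's to_f understands scientific notation such as "0.1592150846569201e10"
  code => 'event.set("@timestamp", LogStash::Timestamp.at(event.get("_@timestamp").to_f))'
}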
#2: I thought about that, but the issue is that I don't want all the fields from the JSON to end up in a nested object. To comply with our data model, I'd have to write another ruby script to move each of them out of the nested object to the root level, which is quite inefficient.
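To illustrate, an untested sketch of the kind of moving I mean (again assuming the json target was "parsed"):
ruby {
  code => '
    doc = event.get("parsed")    # hypothetical target field
    if doc.is_a?(Hash)
      # copy every parsed key up to the root of the event
      doc.each { |k, v| event.set(k, v) }
      event.remove("parsed")
    end
  '
}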
Thank you @Badger, I implemented a hack before viewing your post above:
Given the messages in my use case are usually short, I used gsub to replace "@timestamp" in the "message" string with another key before passing the "message" content to the json filter. It works.
mutate {
  # Hack: rename the @timestamp field embedded in the message to another key
  # to avoid the conflict when parsing the JSON
  gsub => ["message", "\"@timestamp\"", "\"logCreateTime\""]
}
json {
  # convert the document string in message to a json object
  source => "message"
  # remove the message (json string) once it is converted
  remove_field => ["message"]
}
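If I later want @timestamp set from that value, I expect a plain date filter would work, since logCreateTime now comes through as a normal float (untested):
date {
  # logCreateTime is plain epoch seconds once the json filter has parsed it
  match => ["logCreateTime", "UNIX"]
}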
In your experience, would this be a cheaper or more expensive operation than moving the keys?