I have JSON logs which have a timestamp field in ISO 8601 format. I want to convert it to UNIX_MS so I can calculate the response time between steps 1 and 2, and between steps 1 and 3, for each mid.
You want something very much like example 1 in the aggregate documentation.
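A minimal sketch of that approach, modeled on example 1 in the aggregate filter docs. The [event_type] values ("step1", "step2", "step3") and the result field names are assumptions; adjust them to match your data:

```
filter {
  if [event_type] == "step1" {
    aggregate {
      task_id => "%{mid}"
      # Remember when step 1 happened for this mid, as seconds since the epoch.
      code => "map['step1_ts'] = event.get('@timestamp').to_f"
      map_action => "create"
    }
  } else if [event_type] == "step2" {
    aggregate {
      task_id => "%{mid}"
      # Elapsed time since step 1, converted from seconds to milliseconds.
      code => "event.set('time_1_to_2_ms', ((event.get('@timestamp').to_f - map['step1_ts']) * 1000.0).round(3))"
      map_action => "update"
    }
  } else if [event_type] == "step3" {
    aggregate {
      task_id => "%{mid}"
      code => "event.set('time_1_to_3_ms', ((event.get('@timestamp').to_f - map['step1_ts']) * 1000.0).round(3))"
      map_action => "update"
      end_of_task => true
      timeout => 120   # seconds to keep the map if step 3 never arrives
    }
  }
}
```

Note that the aggregate filter requires pipeline.workers to be set to 1, so that all the events for a given mid go through the same filter instance in order.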
Although the Logstash Timestamp object supports nanosecond precision, the date filter does not. However, the json filter does, so we rename the timestamp field to @timestamp so that the json filter parses it directly into [@timestamp].
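Something along these lines (a sketch of that idea, assuming the JSON arrives in [message] with curly quotes that need straightening):

```
filter {
  mutate {
    # Change curly quotes to straight quotes and rename the field inside
    # the raw JSON string, so that the json filter sets [@timestamp].
    gsub => [
      "message", "“", '"',
      "message", "”", '"',
      "message", '"timestamp"', '"@timestamp"'
    ]
  }
  json {
    source => "message"
    # remove_field only runs if the parse succeeds, so a bad [message]
    # is kept for inspection.
    remove_field => [ "message" ]
  }
}
```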
Thank you so much Badger, can you please explain this code? I found in the documentation that the gsub filter matches strings in a field and replaces them with the last value. Will the message be removed?
And in the aggregation we have event.get("@timestamp").to_f
Does "to_f" convert the timestamp to milliseconds?
The [message] field is modified by the gsub, not removed. The curly quotes are changed to straight quotes, and the string timestamp is replaced with @timestamp.
The .to_f converts the LogStash::Timestamp object to a floating point number of seconds since the epoch. The source JSON has microsecond precision, so the floating point number will have microsecond precision. But, as always with floating point numbers, the digital representation is not exact. The difference between 06.124890 and 07.124897 may not be exactly 1.000007; it might be 1.00000699827.
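You can see that for yourself with a ruby filter. The two epoch values here are made up, just two timestamps 1.000007 seconds apart:

```
filter {
  ruby {
    code => '
      t1 = 1672574406.124890
      t2 = 1672574407.124897
      # Mathematically 1.000007, but binary floats cannot represent most
      # decimal fractions exactly, so the result is only close to that.
      event.set("delta_s", t2 - t1)
    '
  }
}
```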
Your [message] field contains JSON. The json filter will parse it so that you have [log_level], [timestamp], [event_type], and [mid] fields.
If it is successfully parsed then the [message] field will be deleted, so that you do not have duplicate data. If the parsing fails then the [message] field will remain so that you can see what is wrong with it.
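If you want to inspect the failures, the json filter also tags failed events with _jsonparsefailure by default, so you can route them in the output. The file path here is just an assumption:

```
output {
  if "_jsonparsefailure" in [tags] {
    # Failed events still contain the original [message], so write them
    # somewhere you can look at them.
    file { path => "/tmp/json-parse-failures.log" }
  }
}
```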