I would suggest a json filter to parse the JSON, then mutate+add_field to combine the date and time fields using sprintf references, then a date filter. There are many, many examples of each of those in this forum.
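For example, a minimal sketch of those three steps (untested, and assuming the JSON line arrives in the message field and contains date and time fields):

filter {
  json { source => "message" }
  mutate { add_field => { "[@metadata][ts]" => "%{date} %{time}" } }
  date { match => [ "[@metadata][ts]", "M/d/yyyy h:mm:ss a" ] }
}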
@Badger Many thanks to you, I wasn't searching in the right place.
I didn't know parsing was a first step, I thought it was done automatically.
I did try this example but wasn't sure about the syntax:
if "GW" in [path] {
mutate { add_field => { "[@metadata][ts]" => "%{date} %{time}" } }
date { match => [ "[@metadata][ts]", "%{DATE_US:date} hh:mm:ss a" ] }
}
The [@metadata] field contains fields that are visible in the pipeline, but are ignored by the outputs, so it is useful to store interim results whilst processing events.
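While testing you can make those fields visible with the rubydebug codec's metadata option; other outputs will still drop them:

output { stdout { codec => rubydebug { metadata => true } } }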
Can you show me how it would be done so I can reverse-engineer it in order to understand?
{"type": "GreatLog", "date": "10/3/2021", "time": "6:21:35 AM", "message": "Take a Measurement ", "data": "<94;1;0;0;97;168;19;136;0;52;(0)><134217728>", "clientntId": "959", "ddid": "D9-9999-004F"}
Also, the @timestamp field will always be in UTC; if your original date is not in UTC you will need to use the timezone option in the date filter to tell Logstash the timezone of your date.
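For example, if your application writes US Eastern local time (a hypothetical zone, adjust to yours):

date {
  match => [ "[@metadata][ts]", "M/d/yyyy h:mm:ss a" ]
  timezone => "America/New_York"
}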
This means that you have two fields, @metadata.ts and @metadata.anything; when using those fields in a Logstash filter you will need to use square brackets to access them, like [@metadata][ts] and [@metadata][anything].
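Putting it together for your sample event, a sketch along these lines (untested; it assumes the JSON line is in the message field):

filter {
  if "GW" in [path] {
    json { source => "message" }
    mutate { add_field => { "[@metadata][ts]" => "%{date} %{time}" } }
    date { match => [ "[@metadata][ts]", "M/d/yyyy h:mm:ss a" ] }
  }
}

For your sample line this combines 10/3/2021 and 6:21:35 AM into [@metadata][ts] and parses the result into @timestamp, which is stored in UTC, so add the timezone option shown above if your source time is not UTC.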