You want something very much like example 1 in the aggregate documentation.
Although the Logstash Timestamp object supports nanosecond precision, the date filter does not. However, the json filter does, so we have to rename the timestamp field to @timestamp to get the json filter to use it to set [@timestamp].
The [message] field is modified by the gsub, not removed. The curly quotes are changed to straight quotes, and the string timestamp is replaced with @timestamp.
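A minimal sketch of that mutate, assuming the curly quotes and the timestamp key appear in [message] as described (the exact patterns may need adjusting to match your data):

```
filter {
  mutate {
    gsub => [
      # straighten curly quotes so the field is valid JSON
      "message", "[“”]", '"',
      # rename the key so the json filter will set [@timestamp]
      "message", '"timestamp"', '"@timestamp"'
    ]
  }
}
```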
The .to_f converts the LogStash::Timestamp object to a floating point number. The source JSON has microsecond precision, so the floating point number will too. But, as always with floating point, the binary representation is not exact: the difference between 06.124890 and 07.124897 may not be exactly 1.000007, it might be something like 1.00000699827.
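A quick Ruby illustration of that rounding (the two literals here are stand-ins for the fractional seconds of two events, not values from your data):

```ruby
t1 = 6.124890
t2 = 7.124897
diff = t2 - t1
# Printing with plenty of digits exposes the inexact binary representation;
# the result is very close to, but not necessarily exactly, 1.000007.
puts format('%.12f', diff)
```

So when comparing such differences in a ruby filter or aggregate timeout, compare against a small tolerance rather than testing for exact equality.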
Your [message] field contains JSON, and the json filter will parse it so that you have [log_level], [timestamp], [event_type], and [mid] fields.
If it is successfully parsed then the [message] field will be deleted, so that you do not have duplicate data. If the parsing fails then the [message] field will remain so that you can see what is wrong with it.
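A sketch of the json filter that does this (remove_field, like all common options, only runs if the filter succeeds):

```
filter {
  json {
    source => "message"
    remove_field => [ "message" ]   # only applied when parsing succeeds
  }
}
```

On failure the event is tagged _jsonparsefailure and [message] is left intact.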