Handling microsecond timestamps


(Bud B) #1

Elasticsearch's native timestamp can currently only handle times with millisecond resolution, and I know there are people out there looking to store timestamps with at least microsecond (usec) resolution.

My workaround for this was to parse the timestamp from my log files into a 'usec since epoch' field using ruby:

    # Assumes a field 'log_datetime' formatted as 'yyyy-MM-dd HH:mm:ss.uuuuuu'
    ruby {
        code => "
            t = Time.strptime(event['log_datetime'], '%Y-%m-%d %H:%M:%S.%N')
            event['timestamp_us'] = t.to_i * 1000000 + t.usec
        "
    }

I can then use this field for sorting where there are multiple events within a millisecond. If anyone has a better solution for handling this, I'd love to hear it.
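To illustrate the conversion outside Logstash, here is a minimal sketch in plain Ruby, assuming the same 'yyyy-MM-dd HH:mm:ss.uuuuuu' format; the `usec_since_epoch` helper name is my own, not part of any API. It shows that two events within the same millisecond still get distinct, correctly ordered values:

```ruby
require 'time'

# Hypothetical helper: parse a 'yyyy-MM-dd HH:mm:ss.uuuuuu' string and
# return microseconds since the epoch. %N consumes the fractional-seconds
# digits; t.usec then exposes them at microsecond precision.
def usec_since_epoch(str)
  t = Time.strptime(str, '%Y-%m-%d %H:%M:%S.%N')
  t.to_i * 1_000_000 + t.usec
end

# Two events only 1 usec apart, inside the same millisecond:
a = usec_since_epoch('2016-01-04 21:23:18.000123')
b = usec_since_epoch('2016-01-04 21:23:18.000124')
puts b - a   # sorting on these values preserves sub-millisecond order
```

Note that `Time.strptime` without a zone directive interprets the string in the local time zone, so if your logs are UTC you may want to append `%z` to the format and an offset to the field first.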

Note: In Kibana, by default, the timestamp_us field will display as something like 1,451,944,998,000,000. If you want to get rid of the commas, under the Settings->Indices tab, you can edit the display format -- just enter a 0 (zero) as the format.

-- Bud
