Nanoseconds with Logstash and Elasticsearch

Good Morning! I am a little stuck figuring out how to handle custom time formats with nanosecond precision. The data I am trying to ingest has two different time formats: 1629480840.652062565 and 2021:08:20:17:34:00:734116725. I'm using dissect in Logstash to set the field names, and assigning the data types and formats in my index mappings, but I'm having a hard time wrapping my head around the correct way to preserve the nanosecond precision.
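For reference, the dissect side looks roughly like this (the message layout and the field names `epoch_ts` and `custom_ts` are simplified placeholders, not my exact config):

```
filter {
  dissect {
    # Hypothetical layout: epoch timestamp, custom timestamp, then the rest.
    # dissect keeps both values as plain strings on the event.
    mapping => {
      "message" => "%{epoch_ts} %{custom_ts} %{rest}"
    }
  }
}
```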

For the epoch time string, I seem to be stuck choosing between epoch_second and epoch_millis, neither of which provides the required precision. For the custom format, I have the format set as "yyyy:MM:dd:HH:mm:ss:nnnnnnnnn" but can only get millisecond precision.

Can anyone point me in the right direction for this?

Thanks so much!

Check out this post:

Thank you for the reply! I should have clarified that I tried that solution, but my challenge is that my timestamps are not in the nanosecond number format used in that post. I guess I need to find a way to convert them from their existing formats into UNIX_NS first and go from there.
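Something like this pure-string conversion seems like it could work for both formats (the function names are mine, and in a real pipeline this logic would live inside a ruby filter, reading and writing event fields). The key idea is to keep the nanosecond digits as text, so they never pass through a float:

```ruby
require 'time'

# "1629480840.652062565" -> "2021-08-20T17:34:00.652062565Z"
def epoch_to_nanos_string(epoch_str)
  secs, frac = epoch_str.split('.')
  frac = (frac || '').ljust(9, '0')[0, 9]   # pad/trim to exactly 9 digits
  Time.at(Integer(secs)).utc.strftime('%Y-%m-%dT%H:%M:%S') + ".#{frac}Z"
end

# "2021:08:20:17:34:00:734116725" -> "2021-08-20T17:34:00.734116725Z"
def custom_to_nanos_string(s)
  y, mo, d, h, mi, sec, nanos = s.split(':')
  "#{y}-#{mo}-#{d}T#{h}:#{mi}:#{sec}.#{nanos}Z"
end

puts epoch_to_nanos_string('1629480840.652062565')
# => 2021-08-20T17:34:00.652062565Z
puts custom_to_nanos_string('2021:08:20:17:34:00:734116725')
# => 2021-08-20T17:34:00.734116725Z
```

Either result is a plain string in a shape that a date_nanos field can parse.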

As a follow-up to this, another problem I'm having is that Logstash is truncating the epoch timestamp "1629480840.652062565" to only three decimal places, regardless of the field type I set on the index. This seems to happen before the ruby filter that converts it to UNIX_NS time, because the new nanoseconds field only has three decimal places as well.

I also tried removing the decimal point and using a time string of "1629480840652062565", but the value is still truncated to millisecond precision.
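(A quick standalone Ruby check suggests part of the problem: a 64-bit float only carries about 16 significant digits, and an epoch-seconds value with nine fractional digits needs 19, so the nanosecond digits are lost the moment the value is coerced to a float anywhere in the pipeline.)

```ruby
# A double cannot represent 1629480840.652062565 exactly; the trailing
# nanosecond digits change as soon as the string becomes a float.
original = '1629480840.652062565'
as_float = format('%.9f', original.to_f)

puts as_float == original
# => false
```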

Nothing in logstash supports nanosecond precision. Only elasticsearch supports that.

Perhaps I'm misunderstanding the workaround posted above then. Isn't the ruby filter that converts the timestamp from a number to the date string format running as a logstash filter?

If you look at the output in the linked post you see

"@timestamp_nanoseconds" => "2009-02-13T23:31:30.123456789Z",
            "@timestamp" => 2009-02-13T23:31:30.123Z,

The second one is a LogStash::Timestamp, and only has millisecond precision. The first is a string, which you can tell because it is surrounded by quotes.

If you have an index mapping that sets the type of the first one to "date_nanos" then when the string arrives in elasticsearch it will get parsed with nanosecond precision. But that is all in elasticsearch, not in logstash.
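For concreteness, a minimal sketch of such a mapping, sent as the body of a PUT request when creating the index (the index name is up to you), using the field name from the output above:

```json
{
  "mappings": {
    "properties": {
      "@timestamp_nanoseconds": { "type": "date_nanos" }
    }
  }
}
```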

Ah, I think that's what I was missing. I was setting the nanoseconds field's format to date_nanos, not its type. Thanks so much!