Hi! I'm using Filebeat 6.3.0 and Docker 18.05.0-ce. Docker stores timestamps with nanosecond precision, but Elasticsearch's date format precision is one millisecond.
It looks like this issue has never been fixed.
Is there an elegant solution nowadays? Would it be acceptable to use a string type in the index mapping and just sort lexicographically? In that case, how do I retrieve this field from Docker's json-log files?
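The lexicographic-sort idea works because Docker's json-file driver writes a fixed-width RFC 3339 timestamp in the `time` field of each log line, so plain string comparison matches chronological order. A minimal sketch (the sample lines are made up, but follow the json-file format):

```python
import json

# Sample lines in Docker's json-file log format; the "time" field carries
# an RFC 3339 timestamp with nanosecond precision and fixed width, so the
# strings sort lexicographically in chronological order.
raw_lines = [
    '{"log":"second\\n","stream":"stdout","time":"2018-06-01T12:00:00.000000002Z"}',
    '{"log":"first\\n","stream":"stdout","time":"2018-06-01T12:00:00.000000001Z"}',
]

events = [json.loads(line) for line in raw_lines]
events.sort(key=lambda e: e["time"])  # plain string comparison

print([e["log"].strip() for e in events])  # → ['first', 'second']
```

Storing that string as a `keyword` field would let Elasticsearch sort on it the same way.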
Have you considered sorting by offset as well? According to the docs, it is the file offset. The problem is file rotation. Using an ingest node or Logstash, one could try to combine the timestamp and offset into a 'sortable' number of type `long`.
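The combined sort key could look like the following sketch (Python for illustration; in practice this logic would live in an ingest-node script or a Logstash filter). The `1_000_000` multiplier is an assumption, not anything from the docs:

```python
# Hypothetical sketch of the "sortable long" idea: pack the millisecond
# timestamp and the file offset into one integer so that events within the
# same millisecond still sort by their position in the file.
# 1.6e12 ms * 1e6 ≈ 1.6e18, which stays inside a signed 64-bit long.
# Assumes offsets below 10**6; a larger offset spills into the millisecond
# part, but ordering within one file is still preserved because both
# fields only grow. Rotation resets the offset, which breaks this scheme.

def sort_key(timestamp_ms: int, offset: int) -> int:
    return timestamp_ms * 1_000_000 + offset

events = [
    {"ts": 1_527_854_400_123, "offset": 2048, "msg": "b"},
    {"ts": 1_527_854_400_123, "offset": 1024, "msg": "a"},
    {"ts": 1_527_854_400_122, "offset": 4096, "msg": "z"},
]
events.sort(key=lambda e: sort_key(e["ts"], e["offset"]))
print([e["msg"] for e in events])  # → ['z', 'a', 'b']
```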
Thanks for the response! I know I can use the offset for this, but in my setup log files rotate often (there are space limitations), so this doesn't look perfect. Is there a way to configure the docker prospector to pass that precise time string (provided by json-log) in a separate field?
When high precision is required, the offset trick can work, but not always.
In a multi-threaded application, log buffering may result in log events entering the log slightly out of order. With a high-precision timestamp this is not an issue.
I second that this should be brought up as an enhancement going forward.
Hi, I also have this problem when using Filebeat to collect Docker logs.
Because my project was urgent and I was unfamiliar with the Beats code, I forked https://github.com/elastic/beats and changed Filebeat's @timestamp precision to nanoseconds, but I know that isn't the best way to solve it.
Now I'd like to know the current status of this problem. Thanks.
Hi! Thanks for the contribution! Let's open a pull request. Although Elasticsearch doesn't support nanosecond precision, this feature will allow us to store the precise timestamp in a separate text field.