Pass full original timestamp from Zeek log to Elasticsearch

I have an Elastic cluster set up with a server hosting Zeek and Filebeat. We are manually running PCAP files through Zeek, Filebeat is picking up the logs, and the data is being parsed and indexed by Elastic, so we can see all the Zeek log data.

I would like to know if anyone has been able to get Filebeat to pass along the full timestamp field from the Zeek log to Elastic. Currently it cuts off part of the timestamp and only keeps it to the millisecond, but I would like to keep it all the way to the nanosecond. I've tried changing the fields.yml file to say the date field is of type date_nanos, but that did not work. I've also tried creating a new field and passing it the zeek.conn.ts field (the "ts" field from the Zeek connection log); the new field is created in Elastic, but it holds the same truncated timestamp with no nanoseconds. It's almost as though Filebeat cannot handle a date/time that goes down to the nanosecond.
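For reference, the kind of mapping change I was trying to make through fields.yml would correspond to something like this index template in Elasticsearch (a sketch only; the template name and index pattern are made up):

    PUT _template/zeek-ts-nanos
    {
      "index_patterns": ["filebeat-*"],
      "mappings": {
        "properties": {
          "zeek": {
            "properties": {
              "conn": {
                "properties": {
                  "ts": { "type": "date_nanos" }
                }
              }
            }
          }
        }
      }
    }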

Hi Matthew,

We have a lot of ongoing work around nanosecond support, for both Beats and Logstash.


One of the PRs adds fraction-of-second tokens for time formatting:

    //  S       fraction of second           nanoseconds    yes     978000
    //  f       fraction of seconds          nanoseconds    yes     123456789

This should be merged in 7.7.0.
Would you mind sharing what exactly the field in your log file looks like?

Hi Andre,

Here is a snippet of a line from our Zeek connection log; it shows the full timestamp. We are using the default config and the Elastic pipelines that come with Filebeat.

{"ts":1548711854.189404,"uid":"CyrSdJ3kqGRU2bOr04","id.orig_h":"172.17.8.109","id.orig_p":49157,"id.resp_h":"172.17.8.2","id.resp_p":88,"proto":"tcp","service":"krb_tcp","duration":0.0032958984375,"orig_bytes":248,"resp_bytes":304,"conn_state":"RSTR","missed_bytes":0,"history":"ShADdFar","orig_pkts":4,"orig_ip_bytes":420,"resp_pkts":4,"resp_ip_bytes":476}

OK, this was helpful, and these are not nanoseconds:

https://en.wikipedia.org/wiki/Nanosecond  10^-9 s   java format SSSSSSSSS
https://en.wikipedia.org/wiki/Microsecond 10^-6 s   java format SSSSSS
https://en.wikipedia.org/wiki/Millisecond 10^-3 s   java format SSS

So this is more like what you have:

    1548711854.189404
    ^          ^  ^
    epoch sec  ms µs

These are actually microseconds.

Normally, a timestamp conversion like this can be done in an ingest pipeline in Elasticsearch, or in Logstash using the date filter.
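If you go the Logstash route, a minimal sketch using the date filter could look like this (the field name [zeek][conn][ts] is only an assumption about how your event is structured; the UNIX format accepts epoch seconds with a fractional part):

    filter {
      date {
        # [zeek][conn][ts] is assumed here; adjust to your event
        match  => [ "[zeek][conn][ts]", "UNIX" ]
        target => "@timestamp"
      }
    }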

Here is an example of how to convert the timestamp in an ingest pipeline, using the simulate API:

    GET /_ingest/pipeline/_simulate
    {
      "pipeline" :
      {
        "description": "_description",
        "processors": [
          {
            "date" : {
              "field" : "timestamp",
              "target_field": "convertedtimestamp",
              "formats": ["UNIX"]
            }
          }
        ]
      },
      "docs": [
        {
          "_index": "testindex",
          "_id": "id",
          "_source": {
            "timestamp": "1548711854.189404",
            "@timestamp": "2020-05-27T11:37:25.559Z"
          }
        }
      ]
    }

The converted timestamp then ends up in the convertedtimestamp field. Note that the UNIX format parses to millisecond precision, so the microseconds are dropped. The result looks like this:

    ...
    "_source" : {
      "convertedtimestamp" : "2019-01-28T21:44:14.189Z",
      "@timestamp" : "2020-05-27T11:37:25.559Z",
      "timestamp" : "1548711854.189404"
    },
    ...

If you like, you can put the converted timestamp into the @timestamp field as well.
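That would just be a different target in the same date processor, for example (a sketch; @timestamp is also the processor's default target_field if you leave it out):

    {
      "date" : {
        "field" : "timestamp",
        "target_field" : "@timestamp",
        "formats" : ["UNIX"]
      }
    }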

Hope that helps.
