Filebeat new timestamp field

Hey,

I'm pushing logs from my application into Elasticsearch using a Filebeat agent installed on my app's machine. The logs my application generates contain a field called timestamp that holds a datetime string defining when the log line was generated. It is formatted like this: `Sun, 29 Apr 2018 16:19:45:825 GMT`.

In Kibana, when creating the index pattern, I can only select the default @timestamp field as the time of the events. But this field is generated (I assume) by Filebeat at the moment a given log line is read from the log file. This is problematic for me, because it creates a time discrepancy in the logs I view in Kibana, and some log messages are displayed in the wrong order.

Is there a way to tell Filebeat to read the value of @timestamp from my log files, rather than setting it to the time when the logs are uploaded? Or is there a way to define a new field, real_timestamp, that I could use as the time field when configuring my index pattern?

I've been going through the Filebeat documentation, but I wasn't able to find a way to do this.

Thanks in advance for your help!

When using Elasticsearch as the output, you can define ingest pipelines and use them to parse the logs sent by Filebeat.
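For example, here's a minimal sketch of such a pipeline using the date processor to set @timestamp from your own timestamp field. The pipeline name `parse-app-timestamp` is just a placeholder, and the format string is my guess based on your `Sun, 29 Apr 2018 16:19:45:825 GMT` example (note the unusual colon before the milliseconds), so double-check it against your actual logs:

```json
PUT _ingest/pipeline/parse-app-timestamp
{
  "description": "Set @timestamp from the application's own timestamp field",
  "processors": [
    {
      "date": {
        "field": "timestamp",
        "target_field": "@timestamp",
        "formats": ["EEE, dd MMM yyyy HH:mm:ss:SSS z"],
        "timezone": "UTC"
      }
    }
  ]
}
```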

You can find example pipelines that do something similar to what you need in the Filebeat modules themselves. For example, the kafka module's pipeline replaces the Filebeat @timestamp field with the result of parsing the kafka.log.timestamp field as a date.
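Once the pipeline exists in Elasticsearch, you can tell Filebeat to send events through it via the `pipeline` setting of the Elasticsearch output in filebeat.yml. Something like this (host and pipeline name are assumptions matching the sketch above):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: parse-app-timestamp
```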

Great, thanks very much, this worked perfectly!

