Date filter vs. accepting @timestamp as-is

Hi,

This is something I've been curious about: I know that with the date filter you can replace what's in @timestamp with the actual timestamp of the log entry itself. But is there a downside to just skipping the date filter and accepting the couple of milliseconds (in my environment) of difference between the log timestamp and when the event was processed? Do I lose anything by doing this? The convenience of having all my @timestamp values in UTC, with Kibana auto-adjusting to my browser timezone, seems way better than making sure all of my devices log in UTC and having to account for the various and sundry date formats they may (or may not) emit.

> But is there a downside to just not using a date filter and accepting the couple milliseconds (in my environment) difference between the log date and when the document was indexed by elasticsearch?

For this part, I could see an issue if there's a backup in your pipeline that causes events to be spooled or queued. In that case the difference could be much longer than a few milliseconds.

Also, consider what happens if the Logstash machine is down for a few hours. Once it catches up, the timestamps in your index and the timestamps in your logs will be very different, and time-range queries in Kibana will put those events in the wrong window.

Out of curiosity, why do you ask this? Are you having trouble setting up the date filter?
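
For what it's worth, the filter itself is only a few lines. Here's a minimal sketch, assuming the raw timestamp has already been extracted (e.g. by grok) into a field I'm calling `logdate` here, in ISO8601 format — adjust the field name and format patterns to match what your devices actually emit:

```
filter {
  date {
    # Try each pattern in order against the extracted field
    match    => ["logdate", "ISO8601"]
    # Only needed if the source timestamps omit a UTC offset
    timezone => "UTC"
    # "@timestamp" is the default target, shown here for clarity
    target   => "@timestamp"
  }
}
```

If parsing fails, the event gets a `_dateparsefailure` tag and keeps its original @timestamp, so you can find and fix the stragglers without dropping data.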


Mostly because I'm lazy. :slight_smile: You've got valid points though. Appreciate the response.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.