@timestamp in log file getting replaced by processing time

I recently switched over from using Logstash as my shipper to Filebeat. The flow is Filebeat -> Logstash -> ES -> Kibana 4. Previously, it was LS Shipper -> Redis -> LS -> ES -> Kibana.

Log files themselves are JSON and contain a field called "@timestamp", which is the desired event timestamp. The logs are being read, and parsing of the JSON is working: I can see my fields in Kibana. However, my @timestamp field is getting replaced by Logstash's processing time. If I rename @timestamp in the log file itself, e.g. to "foo", it shows up properly.

What am I doing wrong that is causing Logstash to replace the incoming @timestamp with its own date? I do not see any errors being generated (and like I said, renaming the field works fine).

Thanks!

Can you share your LS config? Are you reading in JSON directly on the Filebeat side?

Hi, yes, I am reading JSON directly, using input_type: log
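For reference, the relevant part of my filebeat.yml looks roughly like this (a sketch; the path matches my log output below, and the Logstash host is illustrative):

filebeat:
  prospectors:
    - paths:
        - /opt/apache-tomcat/logs/test.log
      input_type: log

output:
  logstash:
    hosts: ["localhost:5044"]  # host is illustrative; port matches my LS beats input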

My LS config has a pretty extensive filter block, but it is extremely simple on the input side. (Note that I was previously using LS Shipper with this same config; the only difference was using the Redis input instead of beats.)

input {
  beats {
    port => 5044
    codec => "json"
  }
}

I looked at my filebeat log and see:
2016-06-29T03:03:17-04:00 DBG Publish: {
  "@timestamp": "2016-06-29T07:03:15.086Z",
  "beat": {
    "hostname": "host1",
    "name": "host1"
  },
  "count": 1,
  "fields": null,
  "input_type": "log",
  "message": "{[BIG SNIP]"@timestamp":"2015-07-01T05:00:15.242Z",[BIG SNIP]}",
  "offset": 0,
  "source": "/opt/apache-tomcat/logs/test.log",
  "tags": [
    "host1"
  ],
  "type": "foo_type"
}

Sorry about having to redact a bunch of data. While I don't claim to understand how Filebeat works, it appears to be generating an @timestamp of its own, which is reasonable, since my understanding is that all it can do is read the file line by line and more or less blindly send each line on (in my case, to LS). You can see that the JSON body contains a different @timestamp. If I rename the field in my test.log, it makes it through to Kibana. However, if I leave it this way, the Filebeat-generated timestamp (or perhaps an LS-generated timestamp) "wins" and is the value stored in ES and displayed as the event time in Kibana.

My next idea (I haven't tried it yet) was to remove the "json" codec from the LS input block. I would expect to see the @timestamp preserved in the message instead of getting parsed and overwritten (though obviously the event @timestamp would then be incorrect). This is mostly a sanity check. FYI, the timestamp in Kibana matches the timestamp in the Filebeat log; I don't know whether LS generated the exact same timestamp or whether you'd expect it to be fractionally later.

Thanks for your help.

I think you are actually hitting this bug: https://github.com/logstash-plugins/logstash-input-beats/issues/33

With 5.0.0-alpha3 you can also read JSON directly in Filebeat: https://www.elastic.co/guide/en/beats/filebeat/master/configuration-filebeat-options.html#config-json
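A minimal sketch of what that looks like (untested; options per the linked docs, path taken from your log above). With overwrite_keys enabled, the @timestamp decoded from the JSON line replaces the one Filebeat adds:

filebeat:
  prospectors:
    - paths:
        - /opt/apache-tomcat/logs/test.log
      input_type: log
      json.keys_under_root: true   # place decoded fields at the top level of the event
      json.overwrite_keys: true    # let the decoded @timestamp win over Filebeat's
      json.add_error_key: true     # tag events whose lines fail to decode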

You are exactly right. I'm embarrassed I didn't find this myself. I'm glad we haven't made this change in production, as I can't easily change the log format. I'm concerned that this issue is from seven months ago. :frowning:

Parsing JSON from Filebeat probably doesn't do me much good since I need to output to LS for filtering.

I guess I will be reverting back to LS for shipping for now. Thank you @ruflin

Update: So a workaround here appears to be changing the input block to read the Filebeat output as "plain" (just delete the codec line) and then, in LS, using the json filter with the source set to the message field.
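In config form, that looks roughly like this (a sketch; my filter block is trimmed to the relevant part):

input {
  beats {
    port => 5044
    # no json codec here; the raw line arrives in the "message" field
  }
}

filter {
  json {
    # parse the JSON out of "message"; if it contains a valid @timestamp,
    # that becomes the event timestamp instead of the processing time
    source => "message"
  }
}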

@ttl One more note about filtering. In case you only need to reduce the fields sent, filebeat-5.0.0 also supports filtering.
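For example (a sketch, assuming the 5.0 processors syntax; the field names are illustrative):

processors:
  - drop_fields:
      fields: ["input_type", "offset"]  # strip these fields before shipping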
