I recently switched from using Logstash as my shipper to Filebeat. The flow is now Filebeat -> Logstash -> ES -> Kibana 4; previously it was LS Shipper -> Redis -> LS -> ES -> Kibana.
The log files themselves are JSON and contain a field called "@timestamp", which is the desired event timestamp. The logs are being read, and parsing of the JSON is working: I can see my fields in Kibana. However, my @timestamp field is getting replaced by Logstash's processing time. If I rename @timestamp in the log file itself, e.g. to "foo", it shows up properly.
What am I doing wrong that is causing Logstash to replace the incoming @timestamp with its own date? I do not see any errors being generated (and like I said, renaming the field works fine).
Hi, yes, I am reading JSON directly, using input_type: log
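For reference, the relevant part of my filebeat.yml looks roughly like this (a sketch; the path is a placeholder for my redacted one):

```yaml
filebeat:
  prospectors:
    -
      # placeholder path; the real one is redacted
      paths:
        - /var/log/test.log
      input_type: log
```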
My LS config has a pretty extensive filter block, but extremely simple on the input side. (Note that I was previously using LS Shipper with this same config. The only difference was using Redis input instead of beats.)
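The input side is essentially just this (port number is an assumption; I've only swapped the redis input for beats):

```
input {
  beats {
    port => 5044
    codec => "json"
  }
}
```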
Sorry about having to redact a bunch of data. While I don't claim to understand how Filebeat works, it appears to generate a @timestamp of its own, which is reasonable, since my understanding is that all it can do is read line by line and more or less blindly send each line on (in my case, to LS). You can see that the JSON body contains a different @timestamp. If I rename the field in my test.log, it makes it through to Kibana. However, if I leave it this way, the Filebeat-generated timestamp (or perhaps an LS-generated one) "wins" and is the value stored in ES and displayed as the event time in Kibana.
My next idea (I haven't tried it yet) was to remove the "json" codec from the LS input block. I would expect the @timestamp to be preserved inside the message instead of getting parsed and overwritten (though obviously the event timestamp would then be incorrect). This is mostly a sanity check. FYI, the timestamp in Kibana matches the timestamp in the Filebeat log; I don't know whether LS generated the exact same timestamp or whether you'd expect it to be fractionally later.
You are exactly right. I'm embarrassed I didn't find this myself. I'm glad we haven't made this change in production as I can't easily change the log format. I'm concerned that this thread is from seven months ago.
Parsing JSON from Filebeat probably doesn't do me much good since I need to output to LS for filtering.
I guess I will be reverting back to LS for shipping for now. Thank you @ruflin
Update: So a workaround here appears to be changing the input block to read Filebeat output as "plain" (just delete the codec line) and then from LS, use the json filter, setting the source as the message.
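Concretely, something like this (a minimal sketch; the port number is an assumption, and any other filters would go alongside the json filter):

```
input {
  beats {
    port => 5044
    # no codec line, so events arrive with the raw JSON line in "message"
  }
}

filter {
  # parse the JSON out of "message"; the embedded @timestamp field
  # then becomes the event timestamp instead of being clobbered
  json {
    source => "message"
  }
}
```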