The setup is this: Filebeat 5.4.0 (Windows Server 2012 R2) -> Logstash 5.4.0 (RHEL 7) -> Elasticsearch 5.4.0 (RHEL 7), with Kibana 5.4.0 (RHEL 7) as the presentation layer.
This is a sample line from the logs:
2017-04-19 15:25:40,378 [80] DEBUG [something] [(null)] - Message was handled without exception 974cc6f7dd91407dbe435439058cda27be4
The resulting @timestamp field is this: @timestamp:May 30th 2017, 10:35:11.786
And here are the relevant bits of the logstash config:
The grok filter does trigger, since the "TimestampTest" field shows up in Elasticsearch, but @timestamp is not replaced. I've checked the logs, and there's nothing interesting in them.
%{TIMESTAMP_ISO8601} doesn't capture the timestamp into a field. You would need %{TIMESTAMP_ISO8601:@timestamp}, but I'm not sure such a direct assignment will work. One typically extracts the timestamp to a temporary field and uses the date filter to process it.
Yes, as I suspected, you can't capture straight into @timestamp. Keep your grok filter but capture the timestamp to a field other than @timestamp. Use that field in your date filter with a date pattern that matches your timestamp, like "ISO8601".
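The approach described above can be sketched as a minimal Logstash filter config. The field name TimestampTemp and the grok pattern for the non-timestamp parts of the line are assumptions based on the sample log line, not the poster's actual config:

```
filter {
  grok {
    # Capture the leading ISO8601-style timestamp into a temporary
    # field instead of trying to write @timestamp directly.
    match => { "message" => "%{TIMESTAMP_ISO8601:TimestampTemp} \[%{NUMBER:thread}\] %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    # Parse the temporary field; the date filter writes the result
    # to @timestamp by default. The explicit Joda pattern covers the
    # space separator and comma-delimited milliseconds in the sample.
    match => [ "TimestampTemp", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601" ]
    remove_field => [ "TimestampTemp" ]
  }
}
```

Listing the explicit Joda pattern before "ISO8601" is deliberate: if the lenient ISO8601 parser does not accept the space-separated format, the explicit pattern still matches.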
With no reference to which line contains the offending clause, I can only guess that the match in the date block is the problem, so I tried a blunt regex like so:
match => [ "TimestampTemp", "\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}" ]
That did remove the diffuse error message from the logs, but it did nothing in terms of replacing @timestamp.
As it turns out, the problem is that TimestampTemp is never overwritten. The documentation for grok's overwrite option says that the field must appear both in a match clause and in an overwrite entry, like the one in the grok block above.
At least it seems I've found the issue, but unfortunately that doesn't bring me any closer to a solution.
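For reference, the documented overwrite usage looks roughly like this. This is a sketch, and the pattern and field name are assumptions carried over from the earlier posts:

```
grok {
  # The field being overwritten must also be captured by the match;
  # overwrite alone does nothing for fields the pattern never fills.
  match => { "message" => "%{TIMESTAMP_ISO8601:TimestampTemp} %{GREEDYDATA:rest}" }
  overwrite => [ "TimestampTemp" ]
}
```

Note that overwrite only matters when the field already exists on the event; for a freshly captured temporary field, the usual culprit is instead the date filter's match pattern, which takes Joda date patterns rather than regular expressions.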