The issue I am facing is that I need to manually push some application logs into ES through Logstash. I am using a separate Logstash instance to do so, but I am stuck on the timestamp of the logs when they are visualized in Kibana. It shows the timestamp at which the logs were pushed into ES (since Kibana uses the @timestamp field), rather than the timestamp at which the logs were generated. Some googling taught me that the @timestamp field is populated by Logstash when it pushes the logs to the ES index. So there are currently 2 apparent solutions for this (searched through Google :P) :
Extract the timestamp from the logs (through a grok filter) and then put the value into the @timestamp field (I tried several methods, but none of them worked as expected)
Extract the timestamp from the logs, put it in a custom field (e.g. logtimestamp) and then push it into ES, then configure Kibana to sort the logs by the logtimestamp field instead of the default @timestamp field (not possible for several reasons)
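For context, the kind of config I was attempting for approach (1) looked roughly like this (a sketch only, with illustrative field names, not a working config from my setup):

```conf
filter {
  # grok pulls the syslog-style timestamp out of the message
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:[@metadata][logtimestamp]} %{GREEDYDATA:[@metadata][rest]}" }
  }
  # date parses it and (with no explicit target) writes the result into @timestamp
  date {
    match => [ "[@metadata][logtimestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```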
So I am currently stuck on this. Any and all help will be appreciated. Also, I was a bit confused about where to post this (Elasticsearch or Logstash), so if this is not the right place, let me know. Thanks!
Well, they are of different formats, but generally I can say they follow the syslog format, like this:
Aug 13 06:31:36 fp-app1 systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 13 06:31:39 fp-app1 systemd[1]: Started Daily apt upgrade and clean activities.
Aug 13 08:42:45 fp-app1 systemd[1]: Starting Cleanup of Temporary Directories...
Aug 13 08:42:45 fp-app1 systemd[1]: Started Cleanup of Temporary Directories.
Aug 13 16:55:36 fp-app1 systemd[1]: Starting Daily apt download activities...
Aug 13 16:55:37 fp-app1 systemd[1]: Started Daily apt download activities.
And I appreciate you replying in such a swift manner! But if you can guide me on how to modify the @timestamp field, I'll be more than happy to try the method out and let you know the outcome.
dissect is a faster (and less functional) alternative to grok. That dissect filter will extract the timestamp from [message] and set (for example) [@metadata][timestamp] to "Aug 13 06:31:36". The date filter will then parse that and set [@timestamp].
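A minimal sketch of such a dissect + date combination (my reconstruction of the idea, not necessarily the exact config from the earlier post):

```conf
filter {
  dissect {
    # the two %{+...} appends rebuild "Aug 13 06:31:36" in a single field
    mapping => {
      "message" => "%{[@metadata][timestamp]} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{hostname} %{rest}"
    }
  }
  # parse the extracted string and overwrite @timestamp with it
  date {
    match => [ "[@metadata][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```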
Hey @Badger, I just tested this config on my dev environment and the filter and config are working as expected. Thank you very much for your help! One more thing I would like to ask: I tried modifying the dissect filter for the following scenario (an additional space between the month and the day):
Aug 13 01:00:05 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)
Aug 13 01:01:10 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)
Aug 13 01:02:04 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)
I modified the filter by introducing a space, but it did not work as expected. Can you guide me on how this modification works, or should I go through the official documentation, and that should be enough?
Once again, many thanks!
If there are always two spaces there, you could have two spaces in both the dissect and date filters. You can also handle padding in the dissect filter using %{[@metadata][timestamp]->}, in which case it replaces multiple separators with a single one.
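A sketch of that padding variant, assuming the same [@metadata][timestamp] field as before (illustrative, not a tested config):

```conf
filter {
  dissect {
    # the -> suffix makes dissect tolerate a run of spaces after the month,
    # so "Aug  5 ..." (two spaces) and "Aug 13 ..." (one space) both match
    mapping => {
      "message" => "%{[@metadata][timestamp]->} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{hostname} %{rest}"
    }
  }
  # since the padding was collapsed, a single-space pattern should suffice
  date {
    match => [ "[@metadata][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```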