Need help regarding the timestamps of logs pushed into Elasticsearch through an instance of Logstash

Hello!

The issue I am facing is that I need to manually push some application logs into Elasticsearch through Logstash. I am using a separate instance of Logstash to do so, but where I am stuck is the timestamp of the logs when they are visualized in Kibana. It shows the timestamp at which the logs were pushed into Elasticsearch (since Kibana uses the @timestamp field), rather than the timestamp at which the logs were generated. Some googling taught me that the @timestamp field is populated by Logstash when it pushes the logs to the Elasticsearch index. So there are currently two apparent solutions (found through Google :P):

  1. Extract the timestamp from the logs (through a grok filter) and then put the value into the @timestamp field (I tried several methods, but none of them worked as expected).
  2. Extract the timestamp from the logs, put it in a custom field (e.g. logtimestamp), and push it into Elasticsearch, then configure Kibana to order the logs by the logtimestamp field instead of the default @timestamp field (not possible for several reasons).

So I am currently stuck on this. Any and all help will be appreciated. Also, I was a bit confused about where to post this (whether Elasticsearch or Logstash), so if this is not the right place to post, let me know. Thanks!

This is a logstash question, so this is the right place to post. Modifying the [@timestamp] field using a date filter is the normal way to do this.

What do your logs look like?

Well, they are of different formats, but generally I can say that they follow the syslog format:

Aug 13 06:31:36 fp-app1 systemd[1]: Starting Daily apt upgrade and clean activities...
Aug 13 06:31:39 fp-app1 systemd[1]: Started Daily apt upgrade and clean activities.
Aug 13 08:42:45 fp-app1 systemd[1]: Starting Cleanup of Temporary Directories...
Aug 13 08:42:45 fp-app1 systemd[1]: Started Cleanup of Temporary Directories.
Aug 13 16:55:36 fp-app1 systemd[1]: Starting Daily apt download activities...
Aug 13 16:55:37 fp-app1 systemd[1]: Started Daily apt download activities.

And I appreciate you replying so swiftly! If you can guide me on how to modify the @timestamp field, I'll be more than happy to try out the method and let you know the outcome.

You could use

    dissect { mapping => { "message" => "%{[@metadata][timestamp]} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{}" } }
    date { match => [ "[@metadata][timestamp]", "MMM dd HH:mm:ss" ] }

Note that since the timestamp does not include the year, logstash will have to guess which year you want, and sometimes it may get that wrong.
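Putting those two filters together, a minimal filter block might look like this (a sketch, assuming your syslog lines arrive in the [message] field):

    filter {
      # Copy the first three space-separated tokens (e.g. "Aug 13 06:31:36")
      # into [@metadata][timestamp]; %{} discards the rest of the line.
      dissect {
        mapping => { "message" => "%{[@metadata][timestamp]} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{}" }
      }
      # Parse that string and overwrite [@timestamp] with the result.
      date {
        match => [ "[@metadata][timestamp]", "MMM dd HH:mm:ss" ]
      }
    }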

Thank you for your reply! But I have 2 questions:

  1. Using this approach, wouldn't I need to configure Kibana to use the [@metadata] field instead of the @timestamp field to order the logs by time?
  2. Also, for the [@metadata] field, wouldn't I need a grok filter to extract the time from the logs and put it into [@metadata]?

dissect is a faster (and less functional) alternative to grok. That dissect filter will extract the timestamp from [message] and set (for example) [@metadata][timestamp] to "Aug 13 06:31:36". The date filter will then parse that and set [@timestamp], so Kibana keeps ordering on @timestamp as usual. Fields under [@metadata] are never sent to the output, so nothing extra gets indexed.
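For comparison, a grok filter doing the same extraction might look like this (a sketch using the stock SYSLOGTIMESTAMP pattern, which also tolerates a padded day-of-month):

    grok {
      # Capture the leading syslog timestamp into [@metadata][timestamp]
      match => { "message" => "^%{SYSLOGTIMESTAMP:[@metadata][timestamp]}" }
    }
    date {
      # Two patterns: zero-padded day ("Aug 13") and space-padded day ("Aug  3")
      match => [ "[@metadata][timestamp]", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
    }

dissect is cheaper because it splits on literal separators instead of running regular expressions, which is why it is the better choice when the format is fixed.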

Sounds awesome then! Let me test this in my dev environment and get back to you. And a great big thanks for your swift input!!

Hey @Badger, I just tested this config in my dev environment and the filter is working as expected. Thank you very much for your help! One more thing I would like to ask: I tried modifying the dissect filter for the following scenario (an additional space between the month and the day):

Aug  13 01:00:05 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)
Aug  13 01:01:10 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)
Aug  13 01:02:04 fp-app1 rkhunter: Rootkit hunter check started (version 1.4.6)

I modified the filter by introducing a space, but it did not work as expected. Can you explain how this modification works, or should going through the official documentation be enough?
Once again, greatly thankful!

If there are always two spaces there, you could have two spaces in both the dissect and date filters. You can also handle padding in the dissect filter using %{[@metadata][timestamp]->}, in which case it replaces multiple separators with a single one.
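For the rkhunter lines above, a dissect mapping using the padding modifier might look like this (a sketch; the -> on the first field absorbs the repeated spaces after the month):

    dissect {
      # "Aug  13 01:00:05 ..." -> [@metadata][timestamp] = "Aug 13 01:00:05"
      mapping => { "message" => "%{[@metadata][timestamp]->} %{+[@metadata][timestamp]} %{+[@metadata][timestamp]} %{}" }
    }

Because the appended pieces are re-joined with a single space, the date filter from before ("MMM dd HH:mm:ss") should still parse the result unchanged.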

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.