Logstash date filter doesn't parse time

I have a log in the MSI installer format (the timestamp contains the time only, without a date):

MSI (c) (F0:8C) [09:00:05:007]: Client-side and UI is none or basic: Running entire install on the server.
MSI (c) (F0:8C) [09:00:05:007]: Grabbed execution mutex.
MSI (c) (F0:8C) [09:00:05:007]: Cloaking enabled.
MSI (c) (F0:8C) [09:00:05:007]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (F0:8C) [09:00:05:007]: Incrementing counter to disable shutdown. Counter after increment: 0

I want to extract the time substring from each log line and combine it with the current date.
This is my filter config:

filter {
  if "curl" in [tags] {
    date {
      locale => "en"
      match => ["message", "HH:mm:ss:SSS"]
      timezone => "+0200"
      target => "logtime"
    }
    mutate {
         add_field => {"logdate" => "%{+YYYY-MM-dd}"}
         add_field => {"tmp_field" => "%{logdate}_%{logtime}"}
    }
    date {
      locale => "en"
      match => ["tmp_field", "YYYY-MM-dd_HH:mm:ss:SSS"]
      timezone => "+0200"
      remove_field => ["logdate","logtime","tmp_field"]
    }
  }
}

When the logs are loaded into Logstash, the filter does not work and every event gets a _dateparsefailure tag.
What am I doing wrong?

Firstly, the pattern in a date filter has to match the entire field. Your first date filter only matches a small substring of [message], so it will not work and will add a _dateparsefailure tag. You could try

dissect { mapping => { "message" => "%{}[%{logtime}]:%{}" } }

or you could pick it out using grok.
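
For example, something along these lines (untested, and using an ad-hoc regex capture rather than a built-in grok pattern) would pull the bracketed time out into [logtime]:

grok { match => { "message" => "\[(?<logtime>[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{3})\]" } }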

Secondly, when you supply an option to a filter more than once, the values are merged. The order in which they are evaluated depends on the Logstash version: in some versions [logdate] is referenced when building [tmp_field] before it has been set, and in other versions after it has been set. If you want to control the order, use two mutate filters.

mutate { add_field => {"logdate" => "%{+YYYY-MM-dd}"} }
mutate { add_field => {"tmp_field" => "%{logdate}_%{logtime}"} }

However, you do not need two mutates; you can do both sprintf references in one

mutate { add_field => {"tmp_field" => "%{+YYYY-MM-dd}_%{logtime}"} }

Note that you can create temporary fields under [@metadata]. These are available to the Logstash pipeline but ignored by outputs

mutate { add_field => {"[@metadata][timestamp]" => "%{+YYYY-MM-dd}_%{logtime}"} }

Then you do not need to use remove_field in the date filter.
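
The date filter then just needs to reference the metadata field, with no remove_field (a sketch of what that could look like):

date {
  locale => "en"
  match => ["[@metadata][timestamp]", "YYYY-MM-dd_HH:mm:ss:SSS"]
  timezone => "+0200"
}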

Thanks for the answer!
I edited the config according to the recommendations:

if "curl" in [tags] {
    dissect {
      mapping => { "message" => "%{}[%{logtime}]:%{}" }
    }
    mutate {
      add_field => {"[@metadata][timestamp]" => "%{+YYYY-MM-dd}_%{logtime}"}
    }
    date {
      locale => "en"
      match => ["[@metadata][timestamp]", "YYYY-MM-dd_HH:mm:ss:SSS"]
      timezone => "+0200"
    }
  }

After that, a logtime field with the time substring from message is created on each event.
But the @timestamp field still contains the time the files were loaded into Logstash, so the date filter did not work.
Also, after adding dissect, loading the logs into Elasticsearch started taking about an hour (previously it took only a few seconds).
