Syslog filter issue with timestamp

Greetings,

I'm having an issue where my syslog filter parses the wrong date format. In Elastic I see yyyy-11-Mo 11:37:17, and I get the following error in Logstash:

[2021-11-29T11:34:43,644][WARN ][logstash.outputs.Elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pci-syslog-2021.11.29", :routing=>nil, :_type=>"_doc"}, #LogStash::Event:0x1f92caa9], :response=>{"index"=>{"_index"=>"pci-syslog-2021.11.29", "_type"=>"_doc", "_id"=>"grSMbH0BNakrN0jJTJaw", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id 'grSMbH0BNakrN0jJTJaw'. Preview of field's value: 'Nov 29 11:34:11'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Nov 29 11:34:11] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}

Here is my filter.conf:

filter {
  mutate {
    rename => { "indice" => "[@metadata][indice]" }
    rename => { "subindice" => "[@metadata][subindice]" }
    rename => { "doctype" => "[@metadata][doctype]" }
  }

  #############################################################
  #                     FILTER SYSLOG                         #
  #############################################################
  if [@metadata][subindice] == "syslog" {
    mutate {
      add_field => { "dateTime" => "%{@timestamp}" }
      convert => { "severity" => "string" }
      convert => { "facility" => "integer" }
      convert => { "priority" => "integer" }
    }
  }
}

Also, if I expand my entry in Elastic, my dateTime seems correct.
dateTime Nov 29, 2021 @ 11:37:17.627

Any idea?

Thank you,

Hi

I think you didn't specify any format for dateTime in the mapping, so by default Elasticsearch expects a date in the format strict_date_optional_time or epoch_millis.

According to your logs, the dateTime field contains the value Nov 29, 2021 @ 11:37:17.627 in the Logstash output, and that does not match either of the two required formats.

So I think you have two possibilities:

  • The first is to edit the mapping in Elasticsearch to specify that the incoming dates are in syslog date format.
  • Or use the date filter in Logstash to convert the syslog date to strict_date_optional_time directly in Logstash.
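For the first option, a mapping change via an index template might look something like the sketch below. The template name, index pattern, and the exact format strings here are assumptions to illustrate the idea, not the poster's actual setup; also note that syslog timestamps carry no year, so letting Elasticsearch parse them has its own pitfalls.

```
PUT _template/pci-syslog
{
  "index_patterns": ["pci-syslog-*"],
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "format": "MMM dd HH:mm:ss||MMM  d HH:mm:ss||strict_date_optional_time||epoch_millis"
      }
    }
  }
}
```

Keeping strict_date_optional_time and epoch_millis in the list means already-converted dates continue to index.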

Cad

Everything @Cad said applies, but note that the issue is the field called [timestamp], so that is the mapping that you would need to change.

syslog timestamps have no year, so logstash uses heuristics to guess the year (if today is in January and the month in the timestamp is in December then assume it is from last year and so on). I do not know if Elasticsearch date parsing does the same. If not, a date filter might be a better bet than a mapping update.
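For example, a minimal date filter that leans on that year-guessing behaviour could look like the sketch below (assuming the syslog timestamp has already been parsed into a field called [timestamp]; the second pattern covers single-digit days, which syslog pads with an extra space):

```
date {
  # "timestamp" holds the raw syslog date, e.g. "Nov 29 11:34:11" or "Dec  2 14:50:05"
  match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
  # with no target set, the parsed value is written to @timestamp
}
```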

Thank you both for your answers. I tried using the date filter plugin as suggested, but I still get the same result. I'm definitely doing something wrong, but I can't figure out what exactly.

Here is my current config:

  if [@metadata][subindice] == "syslog" {
    date {
      match => ["timestamp","MMM dd yyyy HH:mm:ss","ISO8601"]
    }

    mutate {
      add_field => { "dateTime" => "%{@timestamp}" }
      convert => { "severity" => "string" }
      convert => { "facility" => "integer" }
      convert => { "priority" => "integer" }
    }
  }

I also set the same format in my template, but I still get the same error.

[2021-12-02T14:50:49,458][WARN ][logstash.outputs.Elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pci-syslog-2021.12.02", :routing=>nil, :_type=>"_doc"}, #LogStash::Event:0x947f7c8], :response=>{"index"=>{"_index"=>"pci-syslog-2021.12.02", "_type"=>"_doc", "_id"=>"9fGyfH0BNakrN0jJ2UsE", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date] in document with id '9fGyfH0BNakrN0jJ2UsE'. Preview of field's value: 'Dec 2 14:50:05'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Dec 2 14:50:05] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}

Thank you,

Mathieu

Try adding target => "timestamp" to that, so that it overwrites the [timestamp] field instead of setting [@timestamp].

I'm not quite sure I understand exactly what you mean. Is this what you mean?

  if [@metadata][subindice] == "syslog" {
    date {
      target => ["timestamp","MMM dd yyyy HH:mm:ss","ISO8601"]
    }

    mutate {
      add_field => { "dateTime" => "timestamp" }
      convert => { "severity" => "string" }
      convert => { "facility" => "integer" }
      convert => { "priority" => "integer" }
    }
  }

Thanks,

No, I was suggesting

date {
    match => [ "timestamp", "MMM dd HH:mm:ss" ]
    target => "timestamp"
}

I did try that in one of my earlier attempts, except I had the year in the pattern. I just tried it exactly the way you said and I'm still getting the same error. Could it be my template that's messed up?

"properties" : {
  "dateTime" : {
    "format" : "MMM dd HH:mm:ss",
    "type" : "date"
  },
  "timestamp" : {
    "type" : "date"
  }
}

Sorry for the struggle; I'm not the one who implemented this, and I'm not sure he knew what he was doing. I need to fix it before we get our PCI audit.

Thanks again,

Mathieu

Which field is the error message about?

"error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [timestamp] of type [date]

Should I put the same format on timestamp?

If you are using the date filter configuration I suggested above, I cannot imagine how that error message could occur.

Is it possible that I need to allow more than one format? I'm getting syslog from Linux VMs (mostly CentOS 7) and from our VMware appliances (Photon) coming through Log Insight. We are using the loginsight codec plugin. Could that be the cause? I thought about doing separate indices, but that's the way it was initially implemented.

Do you think it would be easier to create a new index for VMware only, or should allowing multiple formats do the trick?

Again, I really appreciate your help.

Mathieu

That is possible, yes. The illegal_argument_exception message shows the format of the data, for example

{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Dec 2 14:50:05] with format [strict_date_optional_time||epoch_millis]",

If you are seeing exceptions with a different date format, you will need to have the date filter include that format as well.
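Assuming the Log Insight/Photon events arrive with ISO8601 timestamps (worth confirming against an actual failing event from that source), a single date filter covering both sources could look like this sketch:

```
date {
  match => [ "timestamp",
             "MMM dd HH:mm:ss",   # RFC 3164 syslog, two-digit day
             "MMM  d HH:mm:ss",   # syslog single-digit day, space padded
             "ISO8601" ]          # assumed format of the appliance events
  target => "timestamp"
}
```

The patterns are tried in order, so listing the most common format first avoids unnecessary parse attempts.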
