Problem With Date Parse - Suddenly Started Last Night

Hi All,

I'm very new to the ELK stack and have set up a server to ingest all my syslogs and firewall logs.

Up until last night at exactly 6:59 PM, all my syslogs were being ingested and the date field was being parsed out, even though I was getting a _dateparsefailure tag on my syslog Elasticsearch entries. For some reason, at 6:59:59 last night, every host that isn't sending an ISO8601-style date format started getting parse errors, and those documents all appear to be discarded.

This is what I'm seeing in the logstash-plain.log:

[2020-09-18T16:05:34,833][WARN ][logstash.outputs.elasticsearch][main][14956b1512991c457313d511f58c1e48be189a385472ceb87b1701e089de8765] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"syslog-2020.09.18", :routing=>nil, :_type=>"system_logs"}, #<LogStash::Event:0x588151a0>], :response=>{"index"=>{"_index"=>"syslog-2020.09.18", "_type"=>"system_logs", "_id"=>"eE_SonQBwN2n-kChcffw", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [syslog_timestamp] of type [date] in document with id 'eE_SonQBwN2n-kChcffw'. Preview of field's value: 'Sep 18 16:05:12'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [Sep 18 16:05:12] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}

The relevant filtering in my /etc/logstash/conf.d is below:

filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
      target => "syslog_timestamp"
    }
  }
}

(Note: I added the target => "syslog_timestamp" line today based on something I found in a Google search, but it hasn't fixed the issue.)

Based on what I can see in the log above, the filtering should match the date Sep 18 16:05:12. I'm at a loss as to why, all of a sudden last night, it would have stopped ingesting the logs.

Has anyone seen this before? Any idea what I might be doing wrong here?

Thanks in advance.

OK, so Elasticsearch will not be able to interpret a value like Sep 18 16:05:12 as a date with its default parsers (strict_date_optional_time||epoch_millis, as your error shows). So if Elasticsearch thinks [syslog_timestamp] has type date, you will get exactly the mapping exception that you show.
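If you want to confirm which type Elasticsearch actually assigned, the field mapping API will show you. A quick sketch; the host, port, and index name here are assumptions based on your log output:

# Ask Elasticsearch how [syslog_timestamp] is mapped in today's index.
# "type": "date" in the response means string values will be rejected.
curl -s 'http://localhost:9200/syslog-2020.09.18/_mapping/field/syslog_timestamp?pretty'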

The type of a field in Elasticsearch is set either by a template or by the first document that indexes that field. Suppose that for 90% of your events [syslog_timestamp] is a string, but for 10% it is a date. On 90% of days (assuming daily indexes) the first event to reach Elasticsearch will have [syslog_timestamp] as a string, so the field is mapped as text, and any subsequent events where it is a date will be accepted, with the date indexed as a string.

However, on 10% of days the first event to reach Elasticsearch will have [syslog_timestamp] as a date, so the field is mapped as date, and any subsequent events where [syslog_timestamp] is a string will get a mapping exception. At 6:59:59 the next day it may magically fix itself, as it rolls over to a new index.
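One way to take that coin flip out of play is to pin the mapping with an index template, so every new syslog-* index maps [syslog_timestamp] the same way no matter which event arrives first. A minimal sketch, assuming Elasticsearch 7.x and the legacy _template API (on 6.x, which the system_logs type in your output hints at, the properties block has to be nested under that type name):

PUT _template/syslog
{
  "index_patterns": ["syslog-*"],
  "mappings": {
    "properties": {
      "syslog_timestamp": { "type": "date" }
    }
  }
}

Pinning the type to date makes the rejection consistent rather than intermittent, which means the Logstash side also has to stop shipping unparsed strings in that field.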

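The date filter tags events it cannot parse with _dateparsefailure by default, so one way to guarantee that is to move the offending value aside before it reaches the output (the syslog_timestamp_raw name is just an illustration):

filter {
  if "syslog" in [tags] and "_dateparsefailure" in [tags] {
    mutate {
      # The date filter could not parse this value, so it is still a string.
      # Move it out of the date-mapped field but keep it around for debugging.
      rename => { "syslog_timestamp" => "syslog_timestamp_raw" }
    }
  }
}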