_dateparsefailure reported in tags even after date is matched

Hi,

I am using the date filter to convert a field to the date type. Here's what the field looks like:
subdata_proc_date: 24 Jun 2018 14:47:14,891

The date filter I've written for this is:

    date {
      match => [ "subdata_proc_date", "dd MMM yyyy HH:mm:ss,SSS", "d MMM yyyy HH:mm:ss,SSS" ]
      target => "subdata_proc_date"
    }

This seems like a correct match, but somehow it is failing.

If the date filter fails to parse a timestamp it'll log a message with additional details. Unless the timestamp comes from a jdbc input, in which case you might be running into another problem.

@magnusbaeck I am not sure what's breaking the parsing. Here's the json format of the log:

"@version": "1",
    "offset": 33771972,
    "message": "24 Jun 2018 14:47:14,891  INFO IpdrImportJob:48 - \tMarked effective end on 944 old devices",
    "effective_end_old_devices": 944,
    "tags": [
      "onm-server-subdata-recs-stat",
      "beats_input_codec_plain_applied",
      "_dateparsefailure"
    ],
    "prospector": {
      "type": "log"
    },
    "@timestamp": "2018-06-24T19:47:19.913Z",
    "subdata_proc_date": "24 Jun 2018 14:47:14,891 ",
    "source": "/opt/vault/server/logs/server.log",
    "host": "ip-112-118-0-119"
  },
  "fields": {
    "@timestamp": [
      "2018-06-24T19:47:19.913Z"
    ],
    "zimbra_proc_time": [
      ""
    ],
    "csg_proc_time": [
      ""
    ],
    "perftech_proc_time": [
      ""
    ]
  },

As you can see, everything is getting parsed correctly, and yet I am seeing _dateparsefailure. Also, the field type is being reported as text, not date.

One more strange thing: @timestamp in the above JSON shows the date as 24, whereas in Kibana it normally shows June 25th 2018, 01:17:19.913.

I'm talking about Logstash's own log.

There are no logs related to this in the Logstash logs.

I find that hard to believe. But I see that your subdata_proc_date ends with a space. The date filter is pretty picky, so you'll probably have to include that space in the date pattern, or else remove the space.
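For example, a minimal sketch of the trimming option, assuming the same field name and that the mutate runs before the date filter:

    # Trim leading/trailing whitespace before the date filter runs
    mutate {
      strip => [ "subdata_proc_date" ]
    }

    date {
      match => [ "subdata_proc_date", "dd MMM yyyy HH:mm:ss,SSS", "d MMM yyyy HH:mm:ss,SSS" ]
      target => "subdata_proc_date"
    }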

Gotcha! Thanks, will try removing the space.

@magnusbaeck I am curious why the Grok debugger is not showing that space. I just tested the existing grok pattern and the field does not contain a space.

Without knowing what your configuration looks like, I have no idea.

Here's the grok pattern I am using:

%{TIMESTAMP:subdata_proc_date}%{SPACE}%{SPACE}%{LOGLEVEL:log_level}%{DATA}Marked effective end on %{NUMBER:effective_end_old_devices} old devices

Log entry to parse:

23 Jun 2018 14:51:32,342  INFO IpdrImportJob:48 - 	Marked effective end on 2279 old devices

Custom TIMESTAMP regex:

TIMESTAMP ^%{DATE}\s%{MONTH}\s%{YEAR}\s%{TIME}

Result of the grok:

{
  "log_level": "INFO",
  "subdata_proc_date": "23 Jun 2018 14:51:32,342",
  "effective_end_old_devices": "2279"
}

Weird. I can't explain that space.

@magnusbaeck I am not sure how, but the space disappeared. Still, despite using the date filter, the field is being reported as text.

I have been facing this issue for 3 days. None of my date filters are producing a date field; all the fields are being reported as text. And yes, the date filter is exactly the same as the one I posted at the beginning with the question.

This is what I am seeing in the JSON: "subdata_proc_date": "2018-06-08T13:54:36.342Z". But the type is still text.

A field's mapping in an ES index can never change. You need to reindex to get a new mapping. Once that's done (you can use ES's get mapping API to verify that the mapping matches your expectations), you probably need to refresh the field list in Kibana.
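For instance, a quick check along these lines (index name illustrative):

    GET filebeat-6.2.4-2018.06.25/_mapping

The field should show "type": "date" once the reindexed mapping is in place.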

@magnusbaeck Agreed. But what if a new index is created on a daily basis? In other words, my indices follow the default Filebeat naming format.

Does that mean I'd have to reindex at least once daily in order to turn the text field into a date?

Because I've been doing exactly that for the last 3 days. Here's the procedure:

Every day a new index is generated in the format filebeat-version-yyyy-mm-dd.
Every day I have to create a new index with the date fields and their format specified. Once that is done, I reindex the old index's data into the new one and delete the old one (a rough sketch of the reindex step follows below).

I am not sure this is the best procedure. Is there a better way you can suggest?
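For reference, a rough sketch of that daily reindex step using ES's _reindex API (index names illustrative):

    POST _reindex
    {
      "source": { "index": "filebeat-6.2.4-2018.06.24" },
      "dest": { "index": "filebeat-6.2.4-2018.06.24-new" }
    }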

A default Elasticsearch setup will index a string containing "2018-06-08T13:54:36.342Z" as a timestamp. If that doesn't happen for you I'm not sure what's up. Maybe an index template that disables dynamic mappings or something?
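One way to check that, assuming your templates follow Filebeat's filebeat-* naming (the template API accepts wildcards):

    GET _template/filebeat-*

Look for anything like "dynamic": false or "date_detection": false in the output.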

To rule out some factors, try using ES's REST API to create a new index (e.g. filebeat-whatever-2018-06-30) containing a single document:

{"subdata_proc_date": "2018-06-08T13:54:36.342Z"}

What do the mappings of that index look like?
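Concretely, a sketch of that test (the doc type matches what Filebeat 6.x uses; the document ID is arbitrary):

    PUT filebeat-whatever-2018-06-30/doc/1
    {
      "subdata_proc_date": "2018-06-08T13:54:36.342Z"
    }

    GET filebeat-whatever-2018-06-30/_mapping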

It was inserted correctly, but the type is keyword:

          "subdata_proc_date": {
            "type": "keyword",
            "ignore_above": 1024
          },

Just to confirm that it was inserted:

GET filebeat-6.2.4-2018.06.25-test/_search

gives

{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "filebeat-6.2.4-2018.06.25-test",
        "_type": "doc",
        "_id": "vx-GOGQB3tEACYuyUVty",
        "_score": 1,
        "_source": {
          "subdata_proc_date": "2018-06-08T13:54:36.342Z"
        }
      }
    ]
  }
}

Also, I've kept the ES and Logstash configs pretty much at their defaults.

Looks like I've found something in the mapping: "date_detection": false. Would the default date_detection mechanism detect this timestamp, or would I need to specify the format separately?

Looks like I've found something in the mapping: "date_detection": false.

Yeah, that's the problem.

Would the default date_detection mechanism detect this timestamp, or would I need to specify the format separately?

It'll be detected just fine.

I am not sure if this is due to new changes in ES 6.x or if this is the intended behavior, but I was not able to change date_detection to true. However, setting the types for my date fields in a custom template and applying it to filebeat-* indices worked.
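For reference, a minimal sketch of such a template (template name and field list illustrative; the doc mapping type matches Filebeat 6.x):

    PUT _template/custom-filebeat-dates
    {
      "index_patterns": ["filebeat-*"],
      "order": 1,
      "mappings": {
        "doc": {
          "properties": {
            "subdata_proc_date": { "type": "date" }
          }
        }
      }
    }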

Thanks @magnusbaeck for the help.
