Convert String to date format

Hi All,

I am using ELK7.6.2

I am trying to convert the "createdTime" field from string to date format. I added the following filter to my Logstash config file:

date {
match => ["createdTime", "YYYY-MM-dd HH:mm:ss.SSS"]
target => "createdTime"
}

In the log, 'createdTime' appears as follows:

createdTime	2020-12-16 12:35:37,677

After the change, the index mapping still shows 'text' instead of 'date':

  "createdTime" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256

Please guide.

Thanks

What is the output of your filter?

Can you add stdout { } to your output and see what the result of the filter is?
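For instance, something like this in the output section (a minimal sketch; rubydebug just prints each event to the console, so keep your existing elasticsearch output alongside it):

output {
  stdout { codec => rubydebug }
}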

Thanks. A snippet of the output showing the createdTime field is below:

            "clientIP" => " 1.2.3.4 ",
      "httpClientThreadName" => "  ",
    "concurrentUserRequests" => 41,
             "requestPageId" => " abcd ",
           "mPulseSessionId" => "  ",
              "createdTime" => "2020-12-16 13:37:41,692 ",

There shouldn't be an extra space after each entry. My first guess is that because of this space it is no longer a valid date, so the mapping treats it as a string.

Just curious whether clientIP is mapped as an IP and working as well.

clientIP appears normal and the correct IP is displayed. The mapping is as follows:

  "clientIP" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },

Also, where would the space be introduced? This is how the config for the 'createdTime' field looks:

if [type] == "uat_log_access"  {
                mutate {
                        split => ["message", "~|~"]
                        add_field =>{
                                "createdTime" => "%{[message][0]}"

I don't see it there, but it must be something common to all fields, since they all show an extra space.

In addition to the issue with spaces, your createdTime field has a comma after the seconds, not a period, so the ss.SSS part of your date pattern will not match.
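A pattern along these lines should match (a sketch; the comma in the pattern mirrors the comma in your log):

date {
  match => ["createdTime", "YYYY-MM-dd HH:mm:ss,SSS"]
  target => "createdTime"
}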

Thanks, I fixed the comma part and now stdout looks as follows. Note the createdTime field (and also the unwanted extra space):

      "toGPFirstByte" => 25003,
                  "referrer" => " https://abc/review.html ",
             "gPPerformance" => " 2 ",
            "totalTimeProxy" => 0,
      "httpClientThreadName" => "  ",
                      "type" => "uat_log_access",
             "requestPageId" => " ABCD ",
                    "gPName" => " LAMB ",
                    "method" => " POST ",
                 "WebServer" => " web1 ",
              "toGPLastByte" => 25004,
               "createdTime" => "2020-12-16 14:57:53,515 ",

I wonder where the extra space is configured...

Well, it's a leading and trailing space. createdTime doesn't have a leading space, so I am guessing the date filter fixed that. Also, type doesn't have one, and it is set separately from the message.

Just look in your main dissect, grok, or however you are parsing the main message that comes in.

Thanks. The grok filter looks as follows:

grok {
        patterns_dir => ["/opt/app/logstash/patterns"]
        match => [ "IPRequestId", "%{GREEDYDATA}%{REQUEST_ID:requestId}" ]
}

It is pointing to a patterns directory which contains a file named 'postfix'.

The postfix file contains the following:

ACCESS_LOG_SPERATOR (\~\|\~)
REQUEST_ID [0-9]{1}\-[0-9]{2}\-.*\-.*\-[0-9,#]+
POSTFIX_QUEUEID [0-9A-F]{10,11}
time [0-9]{4}\-[0-9]{2}\-[0-9]{2}\s[0-9]{2}:[0-9]{2}:[0-9]{2}\S[0-9]{3}
FTBC \-[FTBC]{2}\-

Is there a way I could add an expression to trim the extra spaces?

You can strip the fields if you can't find the source of the issue.
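Something like this ahead of the date filter, for example (a sketch; the list only names fields visible in your stdout output, so add whichever fields need trimming):

mutate {
  strip => ["createdTime", "clientIP", "requestPageId"]
}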

Thank you! After I stripped the "createdTime" field, the time now displays as follows:

"createdTime": "2020-12-17T19:17:24.453Z",

But when I run GET for the index mapping, I still see it as "text":

        "createdTime" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },

Also, when I examine the index pattern, it still shows as "string", unlike another field named "timestamp", which shows as "date".

Did you delete the index pattern before running again?

What does your finished Date Filter look like in Logstash?

So I followed the following sequence:

  1. Shut down Logstash
  2. Deleted the related index pattern in Kibana
  3. Brought Logstash back up
  4. Searched for the same index pattern again and recreated it. Observed no change in the mapping.

I don't have any finished date filter defined in my Logstash config.

Your first post has this. Does that still exist?

date {
match => ["createdTime", "YYYY-MM-dd HH:mm:ss.SSS"]
target => "createdTime"
}

Yes, it does:

mutate {
        strip => ["createdTime"]
}

date {
        match => ["createdTime", "YYYY-MM-dd HH:mm:ss,SSS"]
        target => "createdTime"
}

For testing, change target => "createdTime" to target => "tempDate", then check the mapping and see what the data looks like.

Changed it to tempDate. It now looks like:

mutate {
        strip => ["createdTime"]
}

date {
        match => ["createdTime", "YYYY-MM-dd HH:mm:ss,SSS"]
        target => "tempDate"
}

On checking after deleting the index pattern, tempDate now appears with the "date" type:


      "tempDate" : {
          "type" : "date"
        },

Deleting the index pattern does nothing, since the pattern just reflects whatever mapping exists in Elasticsearch. When you recreate the pattern, it will still reflect the text mapping of the field you want to be a date.

You need to delete the index in Elasticsearch, or start with a new index.
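For example, from Kibana Dev Tools (a sketch; my-access-index is a hypothetical name, substitute your actual index):

DELETE /my-access-index

Once Logstash recreates the index with the strip and date filters in place, createdTime should be mapped as date, just as tempDate was above.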