Convert a field containing a timestamp from string to a date/time or timestamp?

Hello

As you can see here, I have two fields containing a date and time:

[screenshot]

I'd like to change it so Elastic stores it as a date/time, and maybe even change the order (yyyy/mm/dd hh/mm). How can I change this?

My logstash config:

input {
  http {
    port => 5057
  }
}


filter {
  if [host] != "172.16.2.201" {
    drop { }
  }
}

filter {
  csv {
    source => "message"
    columns => [
      "Timestamp",
      "EventType",
      "TimeOfConnection",
      "SessionID",
      "SessionName",
      "NumberOfHostsConnected",
      "LoggedOnDomain",
      "LoggedOnUser"
    ]
  }
}




output {
  elasticsearch {
    hosts => ["localhost"]
    user => "elastic"
    password => "mypass"
    index => "myndex-%{+yyyy.MM.dd}"
  }
}

How can I get it converted to a timestamp?

You can use the date filter to convert your data to your desired output.

OK but in my example, how would I use it?

Are you saying this would work:

filter {
  date {
    match => [ "Timestamp", "dd/MM/yyyy H:mm:ss" ]
  }
}

??

Just tried it and it doesn't work.

Add a target property if you want to keep the same field name. If not, it will save the parsed value to @timestamp.

filter {
 date {
  match => [ "Timestamp", "dd/MM/yyyy H:mm:ss" ]
  target => "Timestamp"
 }
}

So for me it would be:

input {
  http {
    port => 5057
  }
}


filter {
  if [host] != "172.16.2.201" {
    drop { }
  }
}

filter {
  csv {
    source => "message"
    columns => [
      "Timestamp",
      "EventType",
      "TimeOfConnection",
      "SessionID",
      "SessionName",
      "NumberOfHostsConnected",
      "LoggedOnDomain",
      "LoggedOnUser"
    ]
  }
}

filter {
 date {
  match => [ "TimeOfConnection", "dd/MM/yyyy H:mm:ss" ]
  target => "TimeOfConnection"
 }
}


output {
  elasticsearch {
    hosts => ["localhost"]
    user => "elastic"
    password => "mypass"
    index => "myndex-%{+yyyy.MM.dd}"
  }
}

Would this be correct?

That seems to still store it as a string.

What is your mapping?

      "date": {
        "type":   "date"
      }

I do not understand your question, sorry.

Every index has an associated mapping where you tell it what the data is going to be in the index and what type it is.

I receive the data from logstash and send it to Elasticsearch from Logstash. I do not do anything with the index except name it.

When you first write to an index, if a mapping does not exist, Elasticsearch creates one for you. Since your data was a string initially, it's most likely mapped as keyword/text.
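If you want to check, you can ask Elasticsearch for the mapping it generated. A quick sketch, assuming the myndex-* index name from your output config (run it in Kibana Dev Tools, or with curl against localhost:9200):

GET myndex-*/_mapping

If the field was dynamically mapped as a string, the response will show something like "TimeOfConnection": { "type": "text", ... }.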

If you don't have any important data in that index, the easiest fix is to delete the index and let Elasticsearch automatically create a new one; it should pick that field up as a date now.
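For example, a sketch assuming the myndex-* naming from your config (be careful: this permanently removes the data, and wildcard deletes may be disabled on some clusters):

DELETE myndex-*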

OK, let me give it a shot and delete the Kibana index.

I don't think it worked, because when it says "Select a primary time field for use with the global time filter." it doesn't allow me to select that field...

Although it picks it up as a string, the format is different:

[screenshot]

Did you create an index pattern? You might need to refresh/delete that also before bringing in new data.

When you asked me to delete the index, that is what I did: Delete the Kibana index pattern.

Did you delete your index also? If not, then you still have data in the old format, which Elasticsearch will think is a string and map as text.

The best way to do this is to delete the index and the index pattern. Then manually set your index mapping. Then ingest the new data.

Oh OK, I understand you now, sorry.

Could you clear up what you mean by:

"Then manually set your index mapping."

Thanks

You can create a mapping associated with an index that tells it what fields and data types to expect. Then, when data comes in, it will be mapped correctly.

Your date field would look like this:

"TimeOfConnection": {
 "type": "date"
}
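Putting that together, here is a sketch of an index template, so every new daily myndex-* index gets the date mapping before Logstash writes to it. This uses the composable template API available from Elasticsearch 7.8 onward; on older versions the legacy _template endpoint would be used instead, and the field name is just the one from your csv filter:

PUT _index_template/myndex
{
  "index_patterns": ["myndex-*"],
  "template": {
    "mappings": {
      "properties": {
        "TimeOfConnection": { "type": "date" }
      }
    }
  }
}

Any fields you don't list are still mapped dynamically, so the rest of your CSV columns will come in as keyword/text as before.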