Date filter doesn't work


(Ahmed HADDAD) #1

Hello, why doesn't my date filter work? I've seen a lot of questions here and they helped me optimise my configuration, but the date filter still doesn't work:

My column1 is still of type String, and I still get a separate @timestamp with today's time.

input {
  file {
    path => "C:\Users\GeeksData\Desktop\ElasticSerach\tablelogg.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}

filter {
  csv {
    separator => ";"
    columns => ["column1","column2","column3","column4","column5"]
  }

  date {
    match => ["column1","dd/MM/yyyy HH:mm"]
    target => "@timestamp"
  }

  mutate { convert => ["column2","integer"] }
}

output {
  elasticsearch {
    hosts => "localhost"
    index => "indice6"
    document_type => "donnèes"
  }
  stdout { codec => rubydebug }
}

#2

What does your rubydebug output look like?
In my configuration the date filter looks like this (but I have a different time format):

	date 
	{
		id => "ulog:date:logTime"
		match => ['logTime', 'YYYY-MM-dd HH:mm:ss']
		timezone => "Europe/Berlin"
		#remove_field => ['logTime']
	}

(Ahmed HADDAD) #3

This is what the console shows. I'm using Windows, so I can't copy all of it:
"column1" => "12/07/2017 16:02",
"@timestamp" => 2017-07-12T14:02:00.000Z,
........


(Magnus Bäck) #4

This is what the console shows. I'm using Windows, so I can't copy all of it:
"column1" => "12/07/2017 16:02",
"@timestamp" => 2017-07-12T14:02:00.000Z,
........

That looks correct.


#5

@timestamp is shown / stored in UTC.
You should set your timezone explicitly.
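
A hedged sketch of the date filter with an explicit timezone (Europe/Paris is an assumption for illustration; use the zone your timestamps are actually written in):

```
date {
  match    => ["column1", "dd/MM/yyyy HH:mm"]
  target   => "@timestamp"
  timezone => "Europe/Paris"   # assumed zone, not from the original config
}
```

With a timezone set, the conversion to UTC no longer depends on the local timezone of the machine running Logstash.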


(Ahmed HADDAD) #6

What I want is column1 (which represents the time in my case) converted from string to date.
I've read some posts here, and maybe I didn't understand them well, mentioning that I should create a date filter and add a target of @timestamp so that @timestamp would be the time of column1.


(Ahmed HADDAD) #7

Why timezone? What I want is for column1 to be seen as a date instead of a string, that's all.


(Magnus Bäck) #8

What I want is column1 (which represents the time in my case) converted from string to date.

You can use the date filter to parse the original date and store it back into the same field (use the target option). Then ES will auto-detect that field as a timestamp, but that will only happen when you recreate the index, since the mapping of an index's field can't be changed.

You can also set the mapping explicitly to have ES accept the original field value as a timestamp.
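
As a sketch of that second option, assuming the index name indice6 and type donnèes from the config above (and an Elasticsearch 5.x-style mapping, since document_type is used), recreating the index with an explicit date mapping might look like:

```
PUT indice6
{
  "mappings": {
    "donnèes": {
      "properties": {
        "column1": { "type": "date", "format": "dd/MM/yyyy HH:mm" }
      }
    }
  }
}
```

With this mapping, ES parses the raw "dd/MM/yyyy HH:mm" string in column1 as a date without any Logstash date filter.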


(Ahmed HADDAD) #9

Store it back to the same field, you mean like this:

date {
  match => ["column1","dd/MM/yyyy HH:mm"]
  target => "column1"
}

?


(Magnus Bäck) #10

Yes.


#11

Timezones are important, especially if you have logs where the dates are stored in different time zones.
In our application, which we monitor with the Elastic Stack, some logs store times in UTC while other logs use local time.

When I set the exact timezone for each log, the date filter converts correctly to UTC.
When querying a time interval in Kibana, all log entries / documents are then shown correctly relative to my browser's timezone. So as a Kibana user I don't need to remember which log is stored in which time zone, because all documents are 'moved' to UTC.
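
A hedged sketch of that per-log setup (the field name, condition, and zones here are hypothetical, not from our actual config):

```
filter {
  # Each log source gets the timezone its timestamps are written in,
  # so the date filter normalises everything to UTC.
  if [log_source] == "app_utc" {
    date { match => ["logTime", "YYYY-MM-dd HH:mm:ss"] timezone => "UTC" }
  } else {
    date { match => ["logTime", "YYYY-MM-dd HH:mm:ss"] timezone => "Europe/Berlin" }
  }
}
```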


(system) #12

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.