If you want to change the display format in Kibana, Kibana lets you do that. If you want to store the string in a different format in Elasticsearch, use a ruby filter and strftime. If you want to change the internal format in which Elasticsearch stores dates, you cannot do so.
The format string in Logstash's date filter determines how the input data is interpreted. The filter converts the input into a format that Elasticsearch understands natively.
So, if your input looks like 12:34:56 20/02/2020 (February 20th, 2020), the format string should be HH:mm:ss dd/MM/yyy.
Apologies for the typo in my previous message. Please try again with this format string in the mapping: HH:mm:ss dd/MM/yyyy. Please note that this is the format used in the mapping of the Elasticsearch index. With the correct mapping in place there, the string from the original log message can be interpreted correctly by Elasticsearch, and no further transformation is needed in e.g. Logstash.
I'd also add some other commonly used formats to the format string, which will help with the queries generated by Kibana. Date fields can only be queried in one of the formats defined in the format string. So, for example, the full format string might look like:
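As a sketch (the index and field names here are hypothetical), the mapping might then be:

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "mydatefield": {
        "type": "date",
        "format": "HH:mm:ss dd/MM/yyyy||strict_date_optional_time||epoch_millis"
      }
    }
  }
}
```

strict_date_optional_time and epoch_millis are what Elasticsearch uses by default when no format is given, so keeping them alongside the custom pattern lets the range queries that Kibana generates still match the field.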
Hi Badger, dateandtime is a field that I build by joining separate date and time fields that exist earlier in the pipeline, so I was thinking of applying this procedure only to the date field. Do you think my code below is correct?
"Why doesn't he just test this himself?", you will think... the problem is that today is 03/03/2020, and with the day and month equal I won't notice any change either way.
If I remember correctly, if you event.set a field that already exists, then it becomes an array, so you would want to event.remove("date") between the event.get and the event.set.
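As a sketch of that get/remove/set sequence (the target format string is an assumption), the ruby filter might look like:

```
ruby {
    code => '
        d = event.get("date")
        unless d.nil?
            # remove the field first so event.set does not turn it into an array
            event.remove("date")
            event.set("date", Time.at(d.to_f).strftime("%d/%m/%Y"))
        end
    '
}
```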
The "Time.at(t.to_f).strftime" assumes that t is a LogStash::Timestamp. You need to use a date filter to parse the date field (and overwrite it); then the ruby filter should work.
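Outside of Logstash, the underlying ruby idiom can be sketched like this (the timestamp value is made up for illustration):

```ruby
# Stand-in for a parsed timestamp such as the date filter produces;
# LogStash::Timestamp also responds to to_f (epoch seconds).
t = Time.utc(2020, 2, 20, 12, 34, 56)

# Reformat via strftime, as in the ruby filter's code block.
formatted = Time.at(t.to_f).utc.strftime("%H:%M:%S %d/%m/%Y")
# => "12:34:56 20/02/2020"
```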