# Both filters parse the same "timestamp" field; they differ only in the target field name.
date {
  match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  target => "logdate"
}
date {
  match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  target => "logtime"
}
What happens is that I can view the fields logdate and logtime in Kibana, but not in the format I need for further processing. What I can do to solve it is to edit the fields in Kibana to meet my requirements.
That works, so far so good. But what I want to know is how to solve this issue without editing the fields in Kibana. I want those fields delivered to Elasticsearch/Kibana with the correct formatting already applied.
Basically, Elasticsearch stores them as type date, which internally is just a timestamp in milliseconds. So the way it is output is really a question of the visualization layer, and you already figured out correctly how to change that in Kibana. If you wanted Elasticsearch to return preformatted dates, you would need to store them as strings, but that means losing all the sorting, querying, and visualizing capabilities you get with a date object.
In my opinion, there are not many reasons why you would want to store a meaningful date field (also) as a formatted string instead of letting the actual output layer take care of that.
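If you did decide to store a preformatted string alongside the date field, one way in Logstash would be a ruby filter that derives extra string fields from the already parsed date. This is only a rough sketch, assuming the ruby filter plugin and the newer event API; the field names logdate_str and logtime_str and the output formats are made up for illustration:

ruby {
  code => '
    t = event.get("logdate")
    unless t.nil?
      # t is a Logstash timestamp; t.time returns a Ruby Time object
      event.set("logdate_str", t.time.localtime.strftime("%d.%m.%Y"))
      event.set("logtime_str", t.time.localtime.strftime("%H:%M:%S"))
    end
  '
}

The original logdate field stays typed as date, so sorting and range queries keep working, while the extra string fields are only there for display or downstream processing.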
I see what you mean. Meanwhile, I found a way to convert the value to my needs in Perl after it is stored in Elasticsearch as a timestamp/date, without changing it anywhere else. Thanks for clarifying.