I have just discovered that ElasticSearch expects dates to be in UTC. Since mine are not (they are in local time), when I visualize the data with Kibana all dates are shifted by 4 hours, the difference between my local time and UTC.
So I was wondering what is the best way to convert the content of those fields to be in UTC when dumping the data into ElasticSearch:
1. Should I use the LogStash filter plugin "Alter" to change their values? Or maybe the Ruby plugin?
2. Is there a way to use the LogStash filter plugin "Date" to change them from local time to UTC?
3. Or is it actually possible to do it through the ElasticSearch template, telling it that the dates are coming in local time and letting ElasticSearch do the conversion to UTC itself?
I don't feel 2 and 3 are actually possible, and that I should go with 1.
But no harm in asking, in case I am missing something.
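For what it's worth, option 2 looks feasible to me: the date filter has a `timezone` option that tells it what zone the incoming string is in, and the parsed result it stores is always UTC. A minimal sketch, assuming the field is called `my_datefield_1` and the local zone is `America/New_York` (both are assumptions; adjust to your data):

```
filter {
  date {
    # Parse the local-time string (these patterns are an assumption)
    match    => ["my_datefield_1", "MM/dd/yy HH:mm:ss.SSS", "MM/dd/yy HH:mm:ss"]
    # Interpret the parsed string as this zone; the stored value is UTC
    timezone => "America/New_York"
    # Write the converted value back to the same field instead of @timestamp
    target   => "my_datefield_1"
  }
}
```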
Hi @Badger, thanks a lot for such a prompt response.
I read this in the documentation
> The date filter is used for parsing dates from fields, and then using that date or timestamp as the logstash timestamp for the event.
The part I am concerned about is "and then using that date or timestamp as the logstash timestamp for the event."
I already have another field acting as timestamp.
I don't want any of the new ones, after converting them to UTC using filter plugin Date, to suddenly become the timestamp.
Or can that be avoided simply by using the target option?
Would something like this work?
```
filter {
  date {
    match  => ["my_datefield_1", "MM/dd/yy HH:mm:ss.SSS", "MM/dd/yy HH:mm:ss"]
    target => "my_datefield_1"
  }
}
```
In that case, the documentation is a little bit misleading, as it makes you believe that my_datefield_1 would become the new timestamp for the document in ElasticSearch...
One thing I should mention is that the target of the date filter is of type LogStash::Timestamp:

```
"my_datefield_1" => 2019-08-01T20:55:48.719Z,
```
So elasticsearch will (perhaps once you roll to a new index) store it as a date type, and Kibana will give you the option (using a dropdown, if I recall correctly) of using it as a timestamp, but will default to using @timestamp.
That works currently since "my_datefield_1", as created by LogStash, is a string with one of those formats.
Once I start using the new LogStash configuration, that converts "my_datefield_1" into a LogStash::Timestamp type, I will need a new Template, without the format, right?
"my_datefield_1": {
"type": "date"
},
But the new template will only start being used when a new index is created, which, as usual, will be the next day. Indices are named like "my_index-<YYYY.MM.DD>".
I am not sure how to make ElasticSearch accept both the old type (a string with a given format) and the new type (a timestamp) for "my_datefield_1" during the day of the transition. Is it possible?
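Or maybe a single mapping could accept both? If I understand the mapping documentation correctly, a date field's `format` can list several patterns separated by `||`, so during the transition day the field could accept the old custom strings as well as ISO timestamps. Something like this, perhaps (the exact pattern list is my assumption; I'd appreciate confirmation):

```
"my_datefield_1": {
  "type": "date",
  "format": "MM/dd/yy HH:mm:ss.SSS||MM/dd/yy HH:mm:ss||strict_date_optional_time||epoch_millis"
}
```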