My goal is to set up continuous data migration from MySQL to Elasticsearch.
I'm stuck on :sql_last_value.
Logstash records it in my local timezone, so the next time it queries the db, the :sql_last_value it uses is wrong.
My local timezone is CEST (+02:00).
So if the last datetime received from the db was 2018-07-20T00:57:34.000Z, the next db query runs with :sql_last_value = 2018-07-20 02:57:34 and won't pick up any of the recently updated records.
sql_last_value is always shifted to the local timezone in .logstash_jdbc_last_run, so in the next db query :sql_last_value doesn't match the actual datetime values in the db.
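For illustration, the plugin persists that value in $HOME/.logstash_jdbc_last_run (the default last_run_metadata_path) as a local-time timestamp; with the values above, the file contents look something like:

```
--- 2018-07-20 02:57:34.000000000 +02:00
```

When that is substituted into the statement, the offset is dropped and the query compares against 2018-07-20 02:57:34, two hours ahead of the UTC datetimes in the table.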
The only way I've found to make scheduled incremental updates work is to set my local timezone to +00:00.
Is there a way to correct :sql_last_value by manually shifting it back a given number of hours in logstash.conf before the statement is run?
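For reference, one way to do that shift without changing the system timezone is inside the statement itself, using MySQL's CONVERT_TZ with numeric offsets. A minimal sketch, assuming a hypothetical table my_table with an updated_at DATETIME column; the driver path, connection string, and credentials are placeholders:

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "* * * * *"
    # :sql_last_value arrives in local time (+02:00); shift it back to UTC
    # before comparing it with the UTC datetimes stored in the table.
    statement => "SELECT * FROM my_table WHERE updated_at > CONVERT_TZ(:sql_last_value, '+02:00', '+00:00')"
  }
}
```

The hard-coded '+02:00' would break when CEST switches back to CET, though, which is why the jdbc_default_timezone approach described below is cleaner.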
The jdbc_default_timezone setting is designed for this scenario:

- I have a database and all of its timestamps are in local time.
- Logstash/Elasticsearch/Kibana timestamps are always UTC.
- My local time is +01:00 off of UTC, meaning a db time of 2pm is 1pm UTC.
- I set jdbc_default_timezone to an appropriate timezone string, e.g. "Europe/Paris".
- The plugin converts any date/time/timestamp values into UTC when it is receiving data.
- The plugin converts any date/time/timestamp values into local time when it is sending data.

Meaning:

- The plugin stores sql_last_value as UTC.
- My Logstash event timestamps are in UTC.
- At the next run, when sql_last_value is used in the SQL statement, the plugin converts the sql_last_value timestamp into "Europe/Paris" local time before inserting it into the statement.
- My sql_last_value holds 1pm, but the statement logged on my db server shows 2pm.
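Putting that together, a minimal sketch of such an input (driver path, connection details, and table/column names are placeholders, as before):

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    # db datetimes are local time; the plugin converts them to UTC on read,
    # keeps sql_last_value in UTC, and converts it back to local time when
    # substituting :sql_last_value into the statement.
    jdbc_default_timezone => "Europe/Paris"
    schedule => "* * * * *"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
  }
}
```

For the original question, where the db datetimes are already UTC, setting jdbc_default_timezone => "UTC" should presumably keep :sql_last_value aligned with the db values without any manual shifting.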