Logstash jdbc input plugin issue when time reverts from Daylight Saving Time to Standard Time

Hello,

I use logstash with jdbc input plugin to create an Elasticsearch index and keep it synced with a Database table.

My database's timezone is Asia/Nicosia, so I've set jdbc_default_timezone accordingly. The date for :sql_last_value is then converted to UTC when saved to the conf_last_run.log file and back to the Asia/Nicosia timezone when read from it.

So far this configuration has worked fine, and I could see the date being recorded in the logfile after every execution, but I ran into an issue when the clocks reverted from Daylight Saving Time to Standard Time. Logstash seems to stall on the exact page that contains data from after the change to Standard Time:
i.e. Logstash read a date from the logfile that is before the change (+3 hours added to the date when converting to local time)
and retrieved records from the DB, the last of which had a DATE_UPDATED after the time change (so it should subtract 2 hours when saving back to the file).

But after reading from the DB, nothing happens. The conf_last_run.log file is not updated, and the records read from the DB are not written to Elasticsearch. There are no visible errors or warnings in the logfiles. Logstash just keeps reading the same records from the DB (since the logfile is not updated) and doing nothing.

If I stop Logstash and manually set the recorded date to a time just after the change, then it works fine.

Below is a sample of my Logstash configuration.

Any idea what might be the issue?

Thanks

input {
	jdbc {
		jdbc_driver_library => "/pathToDriver/sqljdbc42.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://host:port;databaseName=DB_NAME"
		jdbc_user => "user"
		jdbc_password => "password"
		statement => "
			SELECT DATE_UPDATED, MY_ID, NAME
			FROM MY_TABLE
			WHERE DATE_UPDATED > :sql_last_value"
		jdbc_page_size => 100
		jdbc_paging_enabled => true
		schedule => "*/1 * * * *"
		tracking_column => "date_updated"
		tracking_column_type => "timestamp"
		jdbc_default_timezone => "Asia/Nicosia"
		use_column_value => true
		last_run_metadata_path => "/pathToLogFile/conf_last_run.log"
		record_last_run => true
	}
}
filter {
    ...
}
output {
	elasticsearch {
		hosts => ["host:port"]
		index => "MyTableIndex"
		document_id => "%{[my_id]}"
	}
}

Hi Alexandros, does it work correctly if you try using the unix timestamp in seconds as demonstrated here: How to keep Elasticsearch synced with a RDBMS using Logstash and JDBC | Elastic Blog

[That article mentions the following: "A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UMT and the local timezone."]
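For reference, a minimal sketch of that approach (untested; it assumes SQL Server, that DATE_UPDATED is effectively UTC, and uses an illustrative alias unix_ts — the DATEDIFF expression derives epoch seconds):

input {
	jdbc {
		# connection settings as in your original config
		statement => "
			SELECT DATEDIFF(second, '1970-01-01', DATE_UPDATED) AS unix_ts, MY_ID, NAME
			FROM MY_TABLE
			WHERE DATEDIFF(second, '1970-01-01', DATE_UPDATED) > :sql_last_value"
		tracking_column => "unix_ts"
		tracking_column_type => "numeric"
		use_column_value => true
		last_run_metadata_path => "/pathToLogFile/conf_last_run.log"
		record_last_run => true
	}
}

With a numeric tracking column, :sql_last_value is stored and compared as a plain number, so no timezone conversion (and hence no DST ambiguity) is involved.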

