[Logstash][jdbc input plugin] Does sql_last_value with timestamp type support milliseconds?

Hi,
I am trying to collect data from MySQL (or CUBRID) into Elasticsearch and Hadoop (testing with Logstash 7.5.2 on my local Windows 10 desktop).

I found that sql_last_value does not work as I expected.

Here is my config:

input {
  jdbc {
    jdbc_driver_library => "E:\Work\Elastic\logstash-7.5.2\mysql-connector-java-5.1.42.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "~"
    jdbc_user => "~"
    jdbc_password => "~"
    jdbc_paging_enabled => true
    jdbc_page_size => 10
    jdbc_default_timezone => "Asia/Seoul"
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM fds_hard_rule_check_log WHERE check_ymdt > :sql_last_value ORDER BY hard_rule_check_log_seq"
    use_column_value => true
    tracking_column => "check_ymdt"
    tracking_column_type => "timestamp"
  }
}

filter {
  sleep {
    time => "1"
  }
}

output {
  elasticsearch {
    hosts => ["~"]
    index => "fds_hard_rule_check_log"
    document_id => "%{hard_rule_check_log_seq}"
  }
}

and Logstash works like this (screenshot of the query log):

But the check_ymdt column in MySQL is TIMESTAMP(3).

So Logstash always selects with check_ymdt > '2020-02-07 16:39:28', and the row with check_ymdt = 2020-02-07 16:39:28.212 is selected again on every run (duplicated).
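The duplication can be reproduced with a small sketch (plain Python datetimes, not the plugin's actual code): if the tracked value is rendered into the query without its fractional seconds, the strict > comparison matches the same row again.

```python
from datetime import datetime

# The tracked row's timestamp keeps milliseconds (TIMESTAMP(3) in MySQL).
row_ts = datetime(2020, 2, 7, 16, 39, 28, 212000)

# Rendering it into SQL without fractional seconds drops the .212 ...
rendered = row_ts.strftime("%Y-%m-%d %H:%M:%S")  # '2020-02-07 16:39:28'
reparsed = datetime.strptime(rendered, "%Y-%m-%d %H:%M:%S")

# ... so "check_ymdt > :sql_last_value" matches the same row again.
print(row_ts > reparsed)  # True
```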

I found that the generated SELECT query does not include milliseconds, yet the .logstash_jdbc_last_run file does save the millisecond info:

--- !ruby/object:DateTime '2020-02-07 16:39:28.212000000 +09:00'

I just wonder whether this is a bug, or whether milliseconds are simply not supported yet.
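In the meantime, one workaround that sidesteps the precision problem entirely is to track the numeric sequence column instead of the timestamp. A sketch based on the config above (connection settings elided as before; this changes the incremental-load semantics to "rows with a higher sequence number"):

```
input {
  jdbc {
    # ... same driver/connection/paging settings as above ...
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM fds_hard_rule_check_log WHERE hard_rule_check_log_seq > :sql_last_value ORDER BY hard_rule_check_log_seq"
    use_column_value => true
    tracking_column => "hard_rule_check_log_seq"
    tracking_column_type => "numeric"
  }
}
```

Since a numeric sql_last_value is substituted exactly, no rows are re-selected, and the existing document_id => "%{hard_rule_check_log_seq}" setting already deduplicates any overlap on the Elasticsearch side.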

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.