JDBC input with jdbc_default_timezone set loses precision


(Salvador G. Aguallo Perez) #1

So I'm facing a duplicate-data issue caused by a malformed SQL query generated by the JDBC input.

I saw this issue (https://github.com/logstash-plugins/logstash-input-jdbc/issues/140), but I'm still hitting the same bug.

The last sql value (sql_last_value) is saved like this:

# cat "/usr/share/logstash/jdbc_last_run/.logstash_jdbc_last_run_AffiliateTransactionConversionDate"
--- !ruby/object:DateTime '2018-07-10 13:19:02.466835200 -04:00'

And the query is executed like this:

[2018-07-10T13:20:07,327][INFO ][logstash.inputs.jdbc     ] (3.676289s) SELECT * FROM (SELECT .... FROM [Transaction] T (NOLOCK) WHERE T.conversionDate > '2018-07-10T17:19:02.466') AS [T1] ORDER BY 1 OFFSET 0 ROWS FETCH NEXT 20000 ROWS ONLY

As you can see, the precision in the executed query is truncated to 3 fractional digits, while the saved value has 7 (it actually has 9, but the last two are always 00). The rows in MSSQL also have 7 fractional digits.

This is a big problem: every row whose timestamp is greater than the truncated value ('...02.466') but not greater than the real saved value ('...02.4668352') is selected again on the next run, which leads to duplicated data.

Am I doing something wrong? Is there any way to get this fixed?
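
For reference, here is a sketch of the workaround I'm considering (untested; the connection settings are elided, the conversiondate_ms alias is a name I made up, and the CAST to datetime2(3) assumes millisecond granularity is acceptable for this table). The idea is to truncate the tracking column to milliseconds on the SQL side too, so both sides of the comparison use a resolution that survives the plugin's formatting:

input {
  jdbc {
    # ... connection settings elided ...
    # Track and compare at millisecond precision on both sides, so the
    # 3-digit value the plugin binds can never re-select ingested rows.
    statement => "SELECT T.*, CAST(T.conversionDate AS datetime2(3)) AS conversiondate_ms
                  FROM [Transaction] T (NOLOCK)
                  WHERE CAST(T.conversionDate AS datetime2(3)) > :sql_last_value
                  ORDER BY T.conversionDate"
    use_column_value => true
    tracking_column => "conversiondate_ms"
    tracking_column_type => "timestamp"
  }
}

The tradeoff is that rows landing in the same millisecond as the saved value can be skipped by the strict >, so this trades possible duplicates for possible gaps within a single millisecond.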

Best regards,
Salvador Aguallo


(Christian Dahlqvist) #2

Elasticsearch only supports millisecond precision for timestamps, which could explain what you are seeing.
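
Independent of the precision issue, if the immediate goal is to avoid duplicate documents, a common mitigation is to derive a deterministic document ID from the row's unique key, so a re-selected row updates the existing document instead of creating a new one. A minimal sketch, assuming the table has a unique transactionid column (the host, index, and column names here are placeholders):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "transactions"
    # Assumed unique key column (the JDBC input lowercases column names
    # by default); re-selected rows overwrite instead of duplicating.
    document_id => "%{transactionid}"
  }
}

If there is no single unique column, the fingerprint filter can hash several fields into [@metadata][fingerprint], and that value can be used as the document_id instead.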


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.