The first two statements run against Database1 and the third statement runs against a different database.
The results for the first two statements are always correct.
However, the output of the third query seems to be wrong: we are getting data from the past, even though I have provided a tracking column and am querying only for data from the last day.
Is the above config correct?
Do I need to make any changes to run the SQL statements on different databases?
By default, the Logstash JDBC input stores the last value in a simple text file at $HOME/.logstash_jdbc_last_run. So the sql_last_value from the first SQL statement is stored and then reused by the second statement, and so on. I don't know why only the last SQL statement causes problems for you, but the solution is to set last_run_metadata_path:
Create a directory like $HOME/.logstash_jdbc_last_run
Configure last_run_metadata_path for each SQL statement to point to a separate file within this directory, as in the sketch below
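A minimal sketch of what that could look like, assuming three jdbc inputs (the connection details, statements, and file names here are examples, not your actual config):

```
input {
  jdbc {
    # Assumed connection details; substitute your own driver, URL, and credentials
    jdbc_driver_library => "/path/to/ojdbc8.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@db1-host:1521/DB1"
    jdbc_user => "user1"
    statement => "SELECT * FROM orders WHERE transaction_date > :sql_last_value"
    use_column_value => true
    tracking_column => "transaction_date"
    tracking_column_type => "timestamp"
    schedule => "* * * * *"
    # Separate last-run file per statement, inside the directory created above
    last_run_metadata_path => "${HOME}/.logstash_jdbc_last_run/statement1.yml"
  }
  jdbc {
    # ... second statement, same database ...
    last_run_metadata_path => "${HOME}/.logstash_jdbc_last_run/statement2.yml"
  }
  jdbc {
    # ... third statement, different database ...
    last_run_metadata_path => "${HOME}/.logstash_jdbc_last_run/statement3.yml"
  }
}
```

With this setup each input keeps its own sql_last_value, so the statements can no longer overwrite each other's tracking state.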
I think I know what the problem is, as I ran into the same issue when I started using the JDBC input: your tracking_column is in uppercase, but by default the JDBC input converts all column names to lowercase.
Can you please try either changing the tracking_column to lowercase or setting lowercase_column_names to false? Both options are sketched below.
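For illustration, the two fixes would look like this in the jdbc input block (the surrounding options are omitted for brevity):

```
jdbc {
  # Option 1 (sketch): reference the column in lowercase,
  # matching the names after the input's default conversion
  use_column_value => true
  tracking_column => "transaction_date"
  tracking_column_type => "timestamp"
}

jdbc {
  # Option 2 (sketch): keep column names exactly as the database returns them
  lowercase_column_names => false
  use_column_value => true
  tracking_column => "TRANSACTION_DATE"
  tracking_column_type => "timestamp"
}
```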
The difference is that the SQL currently returns columns like transaction_date, callnumber, status, ...; the JDBC input then tries to read the value of the column TRANSACTION_DATE, which cannot be found because the column name was converted to lowercase. Therefore, the JDBC input stores the default date value, which is 1970-01-01 00:00:00.000000000 Z.
After changing the value of tracking_column to lowercase, the JDBC input will correctly read the last value of transaction_date and store e.g. 2020-09-23 09:20:13.000000000 Z.
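You can verify this in the last-run file itself, which just holds the serialized last value (the exact serialization may differ by plugin version; timestamps here are the ones from this thread):

```
# Before the fix (sketch): column not found, epoch default stored
--- 1970-01-01 00:00:00.000000000 Z

# After the fix (sketch): last transaction_date read correctly
--- 2020-09-23 09:20:13.000000000 Z
```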