Logstash storing JDBC DB result set in some cache and not updating the data set from the DB

Dear Team,

Kindly find my Logstash conf file:

input {
    jdbc {
        jdbc_validate_connection => true
        jdbc_connection_string => "jdbc:oracle:thin:@"
        jdbc_user => "AXIA_SPRINT_DEV"
        jdbc_password => "AXIA_SPRINT_DEV"
        jdbc_fetch_size => 2000
        #jdbc_paging_enabled => true
        #jdbc_page_size => 20000
        jdbc_driver_library => "D:\Apeksha\logstash-5.4.0\Oracle_JDBC_Driver\ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        statement => "select id, name, city, to_char(dater, 'DD-MON-YYYY HH12:MI:SS') as dater_time from logstash_try WHERE to_char(dater, 'DD-MON-YYYY HH12:MI:SS') > to_char(CURRENT_DATE - interval '7' day, 'DD-MON-YYYY HH12:MI:SS')
        AND to_char(dater, 'DD-MON-YYYY HH12:MI:SS') > TO_CHAR(TO_DATE(:sql_last_value, 'DD-MON-YYYY HH12:MI:SS'), 'DD-MON-YYYY HH12:MI:SS') ORDER BY dater_time"
        use_column_value => true
        tracking_column => "dater_time"
        tracking_column_type => "timestamp"
        #clean_run => true
        jdbc_paging_enabled => true
        jdbc_page_size => 50000
        jdbc_default_timezone => "Asia/Kolkata"
        last_run_metadata_path => "C:\Users\apeksha.bhandari\.logstash_try_001"
        schedule => "*/5 * * * * *"
    }
}

output {
    stdout { codec => json }

    elasticsearch {
        hosts => [""]
        index => "india_30"
        document_id => "%{id}"
        retry_on_conflict => 3
    }

    file {
        codec => json_lines
        path => "D:\Apeksha\logstash-5.4.0\india_30.log"
    }
}

The issue is that after running the query once, Logstash stores these entries in some cache, and on the next scheduled run it does not pick up the updated rows. Is there any way to clear this cache?
The same queries return the correct output when run directly in Oracle.
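There is no query cache involved; the only state the plugin keeps between runs is the `sql_last_value` persisted in the metadata file named by `last_run_metadata_path`. A minimal sketch of inspecting and clearing that state, assuming a Unix-like shell and the plugin's default metadata path (your config points it at `C:\Users\apeksha.bhandari\.logstash_try_001` instead, so substitute that):

```shell
# Hypothetical path: the plugin's default metadata file. Substitute the
# value of your own last_run_metadata_path setting.
META="$HOME/.logstash_jdbc_last_run"

# Inspect the persisted sql_last_value (a single YAML-serialized value).
cat "$META" 2>/dev/null || true

# Stop Logstash first, then delete the file to reset the incremental
# state; the next run behaves as if no query had ever been executed.
rm -f "$META"
```

Deleting the file is equivalent to a one-off reset; for a reset that survives config reloads, the `clean_run` option serves the same purpose.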



The plugin will persist the sql_last_value parameter in the form of a metadata file stored in the configured last_run_metadata_path. Upon query execution, this file will be updated with the current value of sql_last_value. Next time the pipeline starts up, this value will be updated by reading from the file. If clean_run is set to true, this value will be ignored and sql_last_value will be set to Jan 1, 1970, or 0 if use_column_value is true, as if no query has ever been executed.
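Following the passage above, a sketch of forcing a clean state from the config itself (rather than deleting the metadata file by hand) is to uncomment the `clean_run` setting that is already present, commented out, in your input block:

```
input {
    jdbc {
        # ... existing connection and statement settings unchanged ...
        clean_run => true    # ignore the persisted sql_last_value on startup
        last_run_metadata_path => "C:\Users\apeksha.bhandari\.logstash_try_001"
    }
}
```

Note that with `use_column_value => true`, `clean_run` resets `sql_last_value` to 0 rather than Jan 1, 1970, as the quoted documentation says; remove `clean_run` again after the full re-import, or every restart will re-read everything.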

-- Logstash: JDBC Input Plugin - State



last_run_metadata_path

  • Value type is string
  • Default value is "$HOME/.logstash_jdbc_last_run"

Path to the file with the last run time

-- Logstash: JDBC Input Plugin - last_run_metadata_path
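For reference, that metadata file is a plain YAML dump of a single value. A sketch of what it might contain (the value shown here is hypothetical) when the tracking column is a formatted string such as dater_time:

```
--- '20-SEP-2017 07:15:42'
```

If the stored value looks stale or unparseable against the format your query expects in `TO_DATE(:sql_last_value, ...)`, that would explain the incremental query never matching newer rows.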

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.