Do not update sql_last_value when an Elasticsearch error is encountered

input {
    jdbc {
        jdbc_driver_library => "/usr/share/logstash/drivers/jdbc/${PG_DRIVER}"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_connection_string => "jdbc:postgresql://${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME}?user=${DATABASE_USER}"
        jdbc_user => "${DATABASE_USER}"
        jdbc_password => "${DATABASE_PASSWORD}"
        # cron schedule: run the query every minute
        schedule => "* * * * *"
        statement_filepath => "/usr/share/logstash/inputs/query.sql"
        # clean_run => true resets sql_last_value at startup, ignoring stored state
        clean_run => true
        # sql_last_value is written to this file after each completed run
        last_run_metadata_path => "/usr/share/logstash/.logstash_jdbc_last_run"
    }
}

My logstash.conf file uses the above input to query a Postgres database and send the results to Elasticsearch. The query itself uses sql_last_value to ingest only new data. However, if Logstash loses its connection to Elasticsearch, sql_last_value continues to update, resulting in lost data once the connection is reestablished.
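The query in query.sql references the :sql_last_value placeholder roughly like the sketch below (shown inlined via statement instead of statement_filepath; the table and column names are changed for this post):

input {
    jdbc {
        # connection settings as in the config above
        # the jdbc input substitutes :sql_last_value before each scheduled run
        statement => "SELECT * FROM events WHERE updated_at > :sql_last_value"
        # track a column value rather than the time the query last ran
        use_column_value => true
        tracking_column => "updated_at"
        tracking_column_type => "timestamp"
    }
}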

Is there any way to keep sql_last_value from updating when an Elasticsearch error is encountered?

Thank you in advance,
K Smith

No. The input updates sql_last_value when the query completes; it has no way of knowing what happens at the other end of the pipeline.
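If the goal is simply not to lose rows during an Elasticsearch outage, a common mitigation that does not touch sql_last_value is Logstash's persistent queue, which buffers events on disk between the input and the output; when the queue fills, back-pressure should stall the jdbc input rather than drop data. A minimal logstash.yml sketch (the size here is illustrative):

# logstash.yml
# buffer events on disk between the jdbc input and the elasticsearch output
queue.type: persisted
# cap the on-disk queue; when full, back-pressure blocks the input
queue.max_bytes: 1gb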

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.