Failure mechanism in Logstash

Hi Folks,

I am trying to push data from Oracle to Elasticsearch or RabbitMQ, and that part works fine. My problem is this: if Elasticsearch or RabbitMQ goes down while the data is being pushed, I cannot figure out where to resume from, because the last_run_metadata file has already been updated.
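For context, as far as I can tell the metadata file just holds the last tracked value as a single YAML document. With a numeric tracking column like mine it looks something like this (the 12345 is a made-up value):

--- 12345

So once the scheduler has run, the file already points past rows that may never have reached the output.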

input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_driver_library => ""
    jdbc_driver_class => "Java::oracle.jdbc.OracleDriver"
    jdbc_connection_string => "myconnectionstring"
    jdbc_user => "username"
    jdbc_password => "password"
    statement => "SELECT * FROM auto_increment_tb WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    schedule => "* * * * * *"
    last_run_metadata_path => "C:\Pramod\rnd\ElasticSearch\logstash-7.2.0\.logstash_jdbc_last_run"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "auto_increment_tb"
  }
  stdout {
    codec => rubydebug
  }
}
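From what I have read, Logstash also has a persistent queue that buffers events on disk between the input and the output, so events already read from Oracle should survive an Elasticsearch/RabbitMQ outage or even a Logstash restart. Would enabling something like this in logstash.yml help? (The size and path below are just placeholders.)

queue.type: persisted
queue.max_bytes: 1gb                                                  # cap on disk used by the queue
path.queue: "C:/Pramod/rnd/ElasticSearch/logstash-7.2.0/data/queue"   # optional; defaults to the data directory

My understanding is that the elasticsearch output already retries failed bulk requests on its own while Logstash stays up, so the queue would mainly protect against Logstash itself going down, but I am not sure whether that fully covers my case.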

What I really want to know: is there a way to update the last_run_metadata file conditionally, i.e. only after the output has actually accepted the data, or is there a better solution altogether?
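As a workaround I have been considering recovering the resume point from Elasticsearch itself after an outage: query the index for the highest id that actually made it in, then write that value back into the metadata file before restarting the pipeline. A rough sketch, using the index and tracking column from my config above:

GET auto_increment_tb/_search
{
  "size": 0,
  "aggs": {
    "last_indexed_id": { "max": { "field": "id" } }
  }
}

The returned max value would then go into .logstash_jdbc_last_run as YAML (e.g. --- 12345) before starting Logstash again. I am not sure whether manually editing that file like this is safe, hence the question.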

Please help.
