Is there any way to update the `last_run_metadata_file` in the output plugin?

My scenario is as follows: I am using the JDBC input plugin with a tracking column and last_run_metadata_file, and it works as expected. However, when the ETL host is down or unreachable, the filter's API call throws an error. I would like the last_run_metadata_file to be updated after indexing, not after the SQL query executes. Is there any way to achieve this?

No. You cannot make the input depend on the successful output of an event.
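To make the timing concrete: the JDBC input reads sql_last_value from last_run_metadata_path before the scheduled statement runs and writes it back once the query completes, independently of whether downstream filters or outputs succeed. A minimal sketch (paths, table, and column names are placeholders, not taken from your setup):

```
input {
  jdbc {
    # sql_last_value is loaded from this file before the query runs
    # and persisted after the query finishes -- the input never waits
    # for filters or outputs to succeed.
    last_run_metadata_path => "/path/to/sql_last_value.yml"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    schedule => "*/5 * * * *"
  }
}
```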

Thanks. I have two follow-up questions:

I am new to Logstash. The JDBC input is not returning anything, yet the file in my output plugin is updated every time the pipeline's schedule fires. The value I write to the file, i.e. (datetime %{dt}), comes from the database, and in this case the dt field does not exist. Why does the output plugin update the file when the value is not present? My second question: if the Elasticsearch filter plugin encounters a 404 status, does the retry functionality not work?

My snippet is below:

input {
  jdbc {
    last_run_metadata_path => <filepath/sql_last_value.yml>
    statement_filepath => <filepath/query.sql>
    schedule => "*/1 * * * *"
  }
}
filter {
  elasticsearch {
    hosts => "host:port"
    index => "indexname"
    query_template => <filepath/query_template.json>
    retry_on_status => [400, 404, 403, 503, 500, 504]
    retry_on_failure => 5
    username =>
    password =>
    add_field => { "[additionalInfo][created]" => "%{dt}" }
  }
}
output {
  file {
    path => <filepath/sql_last_value.yml>
    codec => line { format => "datetime '%{[additionalInfo][created]}'" }
    write_behavior => "overwrite"
  }
}
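On the first question: when an event does not contain the referenced field, Logstash's %{...} sprintf reference is left unresolved, so the literal text %{[additionalInfo][created]} is written to the file, and the file output still fires for every event. One way to guard against that is a field-existence conditional around the output; a sketch reusing the field and options from the snippet above (the path is a placeholder):

```
output {
  # Only overwrite the metadata file when the field actually exists;
  # otherwise the literal string "%{[additionalInfo][created]}"
  # would be written to the file.
  if [additionalInfo][created] {
    file {
      path => "/path/to/sql_last_value.yml"
      codec => line { format => "datetime '%{[additionalInfo][created]}'" }
      write_behavior => "overwrite"
    }
  }
}
```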

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.