Logstash jdbc - unable to connect to database - plugin restart

We're running Logstash with the jdbc input plugin as a Kubernetes cron job. Due to an occasional network problem, it sometimes cannot connect to the database:

[ERROR][logstash.inputs.jdbc     ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLRecoverableException: IO Error: Unknown host specified "}

Then the plugin is restarted:

[ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.

This fail-and-restart cycle can repeat several times. When the connection finally succeeds and the pipeline has terminated, Logstash seems to stall: the Kubernetes pod neither succeeds nor fails.

I can add an option like connection_retry_attempts => 10 (see the sketch after my config below), but that doesn't guarantee anything: if the connection still fails at the end of those attempts, the plugin is restarted anyway.

It seems to me it would be better to get an error and let Logstash finish. Is there some way to prevent the plugin from being restarted?

    jdbc {
        jdbc_connection_string => "..."
        jdbc_user => "..."
        jdbc_password => "..."
        jdbc_validate_connection => true
        jdbc_driver_library => "/opt/ojdbc6.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        last_run_metadata_path => "..."
        tracking_column => "..."
        tracking_column_type => "timestamp"
        statement_filepath => "..."
    }
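
For illustration, this is roughly the change I mean, in the same config. The values are arbitrary examples, and connection_retry_attempts_wait_time is the companion option that, as I understand the docs, sets the pause in seconds between attempts:

    jdbc {
        jdbc_connection_string => "..."
        ...
        # try to connect up to 10 times, sleeping 5 seconds between attempts
        connection_retry_attempts => 10
        connection_retry_attempts_wait_time => 5
    }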


Not sure if this is correct, but I would try adding a schedule. That way it will try to connect based on that schedule: once an hour or once a day, whatever you set it to.

There is no schedule by default. If no schedule is given, then the statement is run exactly once.
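
Untested, but something like this is what I had in mind. The cron expression is just an example; this one would run the statement every 30 minutes:

    jdbc {
        # rufus-scheduler cron-style syntax: run the statement every 30 minutes
        schedule => "*/30 * * * *"
        ...
    }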

@aaron-nimocks, thank you for the answer. But let me try to explain better.

The original idea was really to run the statement exactly once. The schedule is actually dictated by the Kubernetes cron job: at the scheduled time, it creates a Logstash Docker instance, Logstash starts, and the pipeline runs only once.

In normal operation, when the pipeline terminates, the Logstash Docker instance also finishes, whether it succeeded or not (e.g., with some error). Once it has finished, the cron job can create another instance at the next scheduled time.

With the problem described (after some "unable to connect"/"plugin restart" cycles), the Logstash Docker instance keeps running after the pipeline has terminated, and at the next scheduled time Kubernetes cannot create another instance.
