Logstash 7.2.0 input jdbc plugin restarts itself and inserts data continuously in elastic

I am trying to import data from an Oracle DB into Elasticsearch. The process works with Logstash on Windows, but when I run the same setup on a Unix box it fetches the data from the table and then says: [ERROR][logstash.javapipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <Logstash::Inputs::Jdbc jdbc_user=>""...

Error: No such file or directory - /.logstash_jdbc_last_run
Exception: Errno::ENOENT
Stack: org/jruby/RubyIO.java:1236:in `sysopen', org/jruby/RubyIO.java:3796:in `write'

Conf file:

    input {
        jdbc {
            jdbc_connection_string => 'jdbc:oracle:thin:@<host>:<port>/<service name>'
            jdbc_user => '<user>'
            jdbc_driver_library => '<absolute path to ojdbc8.jar>'
            statement => 'select * from mytable'
        }
    }
    output {
        stdout {
            codec => json_lines
        }
    }
I don't understand why, with the same downloaded logstash-7.2.0 and the same conf file, it runs fine on Windows but ends up in an infinite restart loop on Unix (RHEL 7) server boxes.

You are relying on the default path for the last run metadata.

config :last_run_metadata_path, :validate => :string, :default => "#{ENV['HOME']}/.logstash_jdbc_last_run"

I suggest you set it explicitly.
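A minimal sketch of setting it explicitly in the jdbc input block (the /tmp path below is only an example; use any location the Logstash user can write to):

    input {
        jdbc {
            jdbc_connection_string => 'jdbc:oracle:thin:@<host>:<port>/<service name>'
            jdbc_user => '<user>'
            statement => 'select * from mytable'
            last_run_metadata_path => '/tmp/.logstash_jdbc_last_run'
        }
    }

If HOME is not set in the environment Logstash runs under (common for service accounts), the default expands to "/.logstash_jdbc_last_run", which is not writable, so pointing the setting at an explicit writable path avoids the Errno::ENOENT.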

You don't need the jdbc_driver_library line; it can cause problems.
Instead, copy ojdbc8.jar to "/usr/share/logstash/logstash-core/lib/jars/ojdbc8.jar".
Also add "clean_run => true" to your conf file and it will not use that .logstash_jdbc_last_run.
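A sketch of the suggested change, assuming the jar has been copied into Logstash's jars directory so jdbc_driver_library can be dropped:

    input {
        jdbc {
            jdbc_connection_string => 'jdbc:oracle:thin:@<host>:<port>/<service name>'
            jdbc_user => '<user>'
            statement => 'select * from mytable'
            clean_run => true
        }
    }

Note that clean_run => true makes the plugin ignore any previously stored sql_last_value, so every run starts from scratch; that is fine for a full-table import like this, but not for incremental loads.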

Thanks Badger, I have tried setting it explicitly and now it is working. I don't know whether Logstash restarting the plugin again and again is a bug.

Anyways thanks Badger

Actually Sachin, I downloaded a "gz" file and unzipped it, so I do not have those paths.
It seems the last_run_metadata_path was the problem

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.