Logstash Error while replicating Data from Oracle to Elasticsearch

Hi

I am using Elasticsearch 7.8.0 and Logstash 7.8.0.
RDBMS: Oracle

I have configured Logstash to continuously move data for a particular table from Oracle to Elasticsearch.

My logstash.conf file:

```
input {
   jdbc {
     jdbc_driver_library => "/mnt/logstash-7.8.0/config/ojdbc10.jar"
     jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
     jdbc_connection_string => "jdbc:oracle:thin:@10.10.10.1:1521/orcl"
     jdbc_user => "scott"
     jdbc_password => "scott"
     jdbc_validate_connection => true
     jdbc_paging_enabled => true
     use_column_value => true
     tracking_column_type => "timestamp"
     tracking_column => "unix_ts_in_secs"
     clean_run => true
     last_run_metadata_path => "/mnt/logstash-7.8.0/config/employee_b1-sql_last_value.yml"
   # schedule => "*/15 * * * * *"
     statement => "select employee_id,employee_name,data_in_bytes,mime_type,mime_sub_type,created_date,status,UNIX2TS(last_updated_date) as unix_ts_in_secs from employee where last_updated_date > :sql_last_value and last_updated_date < systimestamp order by last_updated_date asc"
   }
 }

 filter {
   mutate {
     copy => { "employee_id" => "[@metadata][_id]"}
     remove_field => ["employee_id", "@version", "unix_ts_in_secs"]
   }
 }

 output {
  # stdout { codec => json_lines }
   elasticsearch {
     hosts => ["http://10.10.10.2:9200"]
     manage_template => false
     index => "employee_b1"
     document_id => "%{[@metadata][_id]}"
     user => "elastic"
     password => "elastic"
   }
}
```


When I run this config, it starts up properly but never terminates; Logstash keeps restarting the plugin in an infinite loop, and I get the error message below:

```
[2020-09-10T14:26:02,881][ERROR][logstash.javapipeline    ][main][6a31b22bffc61689e18837f701c826ce6cdd536cc374713273e8a1ad4aaee7a1] A plugin had an unrecoverable error. Will restart this plugin.
           Pipeline_id:main
           Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"scott", last_run_metadata_path=>"/mnt/logstash-7.8.0/config/employee_b1-sql_last_value.yml", 
           jdbc_paging_enabled=>true, tracking_column_type=>"timestamp", use_column_value=>true, tracking_column=>"unix_ts_in_secs", 
           jdbc_validate_connection=>true, jdbc_password=><password>, statement=>"select employee_id,employee_name,data_in_bytes,mime_type,mime_sub_type,created_date,status,
           UNIX2TS(last_updated_date) as unix_ts_in_secs from employee where last_updated_date > :sql_last_value 
           and last_updated_date < systimestamp order by last_updated_date asc", clean_run=>true, jdbc_driver_library=>"/mnt/logstash-7.8.0/config/ojdbc10.jar", 
           jdbc_connection_string=>"jdbc:oracle:thin:@10.10.10.1:1521/orcl", id=>"6a31b22bffc61689e18837f701c826ce6cdd536cc374713273e8a1ad4aaee7a1", 
           jdbc_driver_class=>"Java::oracle.jdbc.driver.OracleDriver", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_06dd4efa-2fcc-40ae-8dc2-93b1b0b9a98f", 
           enable_metric=>true, charset=>"UTF-8">, jdbc_page_size=>100000, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", 
           connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, plugin_timezone=>"utc", record_last_run=>true, lowercase_column_names=>true, 
           use_prepared_statements=>false>
           Error: no implicit conversion of BigDecimal into String
           Exception: TypeError
           Stack: uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date/format.rb:335:in `_parse'
```


Could you please help me understand the cause of this error so that I can fix it?

Thank you,
Tushar Nemade

I found the culprit.

The output of the `UNIX2TS(last_updated_date)` column is not in a form Logstash can handle: it apparently comes back as an Oracle NUMBER (the BigDecimal in the error above), while `tracking_column_type => "timestamp"` makes the jdbc input try to date-parse it. It needs to be converted properly.

Could you please tell me how to convert an Oracle timestamp column into a Unix timestamp (epoch seconds)?
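In case it helps, here is a sketch of one way to get epoch seconds in plain Oracle SQL, assuming `last_updated_date` is a DATE (or a TIMESTAMP that can be cast to one) stored in UTC; this is an illustrative replacement for the custom UNIX2TS call, not its actual definition:

```
-- Epoch seconds from an Oracle DATE/TIMESTAMP column.
-- Subtracting two DATEs yields a difference in days, so multiply by 86400.
-- Assumes the column is stored in UTC; normalize with SYS_EXTRACT_UTC otherwise.
select employee_id,
       round((cast(last_updated_date as date) - date '1970-01-01') * 86400)
         as unix_ts_in_secs
  from employee;
```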

Thanks
Tushar Nemade

This is fixed now.
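For anyone who hits the same TypeError, here is a minimal sketch of the kind of change that resolves it, assuming the epoch value is selected as a plain NUMBER: track the column as "numeric" rather than "timestamp", and compare the numeric epoch value against `:sql_last_value` in the WHERE clause. The epoch expression below is the illustrative one from the previous post, not the original UNIX2TS function:

```
input {
  jdbc {
    jdbc_driver_library => "/mnt/logstash-7.8.0/config/ojdbc10.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@10.10.10.1:1521/orcl"
    jdbc_user => "scott"
    jdbc_password => "scott"
    use_column_value => true
    # The tracking column holds epoch seconds (a NUMBER), so track it as
    # "numeric"; with "timestamp" the plugin tries to date-parse the
    # BigDecimal value and raises the TypeError shown earlier.
    tracking_column_type => "numeric"
    tracking_column => "unix_ts_in_secs"
    last_run_metadata_path => "/mnt/logstash-7.8.0/config/employee_b1-sql_last_value.yml"
    # Compare the numeric epoch expression against :sql_last_value.
    statement => "select employee_id,employee_name,data_in_bytes,mime_type,mime_sub_type,created_date,status,round((cast(last_updated_date as date) - date '1970-01-01') * 86400) as unix_ts_in_secs from employee where round((cast(last_updated_date as date) - date '1970-01-01') * 86400) > :sql_last_value and last_updated_date < systimestamp order by last_updated_date asc"
  }
}
```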
