Error registering plugin, Pipeline aborted due to error (<TypeError: can't dup Fixnum>), Failed to execute action


#1

Hi everyone,
I'm a beginner with ELK and am trying to load data from MySQL into Elasticsearch (as a next step I want to query it via the Java REST client), so I'm using Logstash 6.2.4 and Elasticsearch 6.2.4 and followed an example here.
When I run bin/logstash -f /path/to/my.conf, I get this error:

[2018-04-22T10:15:08,713][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/testdb\", jdbc_user=>\"root\", jdbc_password=><password>, jdbc_driver_library=>\"/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM testtable\", id=>\"7ff303d15d8fc2537248f48fae5f3925bca7649bbafc30d2cd52394ea9961797\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_f8d44c47-8421-4bb9-a6b9-0b34e0aceb13\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"/Users/chu/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"can't dup Fixnum", :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,256][ERROR][logstash.pipeline        ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<TypeError: can't dup Fixnum>, :backtrace=>["org/jruby/RubyKernel.java:1882:in `dup'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date/format.rb:838:in `_parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1830:in `parse'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:87:in `set_value'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:216:in `register'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:342:in `register_plugin'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `register_plugins'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:500:in `start_inputs'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:394:in `start_workers'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:290:in `run'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:250:in `block in start'"], :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,314][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}

Here is the testdbinit.conf:

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => "localhost:9200"
    index => "testdemo"
    document_id => "%{personid}"
    document_type => "person"
  }
}

Here is the table (database: testdb, table: testtable):

I tried googling the issue and searching Stack Overflow, but I still have no clue. I think some type conversion error (TypeError: can't dup Fixnum) causes this, but how do I solve it? One more thing confuses me: I ran the same code yesterday, successfully loaded the data into Elasticsearch, and could search it via localhost:9200. But the next morning, trying the same thing, I hit these errors. I have been wrestling with this for a whole day; please give me some hints.


(sathish) #2

Any help on this? I'm getting the same error:
[2018-04-25T11:34:31,810][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<TypeError: can't dup Fixnum>, :backtrace=>["org/jruby/RubyKernel.java:2029:in `dup'", "/xxxx/xxxx/elastic/logstash-5.2.1/vendor/jruby/lib/ruby/1.9/date/format.rb:833:in `_parse'", master/lib/logstash/plugin_mixins/value_tracking.rb:87:in `set_value'",


#3

So the problem seems to be sql_last_value, which is initialized to 0 for numeric values or 1970-01-01 for datetime columns if there is no file at the last_run_metadata_path. (See value_tracking.rb:

def initialize(handler)
  @file_handler = handler
  set_value(get_initial)
end
...
# in the numeric value tracker
def get_initial
  @file_handler.read || 0
end
...
# in the datetime value tracker
def get_initial
  @file_handler.read || DateTime.new(1970)
end

)

I'm not sure, but I guess in your case there might be a file with invalid content, so the parsing fails. You could check the file's content and maybe delete it to see if that helps?
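To make the failure mode concrete, here is a minimal Ruby sketch (the file name and helper method are my own, not the plugin's real API) of what can go wrong: the last-run file is read as YAML, and if its content parses to a plain integer instead of a timestamp, the subsequent date parsing is exactly where "can't dup Fixnum" comes from on JRuby.

```ruby
require 'yaml'

# Hypothetical stand-in for the tracker's read step: the file at
# last_run_metadata_path is YAML, e.g. "--- 2018-04-21 10:00:00 +00:00".
def read_last_value(path)
  YAML.load(File.read(path)) if File.exist?(path)
end

# Simulate a corrupted last-run file whose content YAML parses as an integer.
File.write("last_run_demo.yml", "--- 12345\n")

value = read_last_value("last_run_demo.yml")
# Prints Integer (Fixnum on older Rubies) - not a date. Feeding such a value
# into DateTime parsing is what triggers the TypeError seen in the backtrace.
puts value.class

# Deleting the file (or using clean_run => true) resets the tracker.
File.delete("last_run_demo.yml")
```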


#4

Hi Jenni,
Thanks for your help. I added clean_run => true to the conf file and ran Logstash again; no more errors occur.
So I think: in my case, the sql_last_value stored at last_run_metadata_path was not a valid numeric or datetime value; after clean_run => true reset the wrong value to 0 (or 1970-01-01), the pipeline continued and the data was indexed successfully. Am I right?
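For reference, this is the one-line change against my testdbinit.conf above (everything else stays the same):

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
    clean_run => true   # ignore/reset any previously stored sql_last_value
  }
}
```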


#5

Hi snatesa1,
In my case I added clean_run => true to the conf file to reset sql_last_value, and the data was indexed successfully.
Maybe this will help you.


#6

Seems so. Basically a "Have you tried turning it off and on again?" situation :smiley:


#7

:rofl: indeed lol


(sathish) #8

Thank you, Jenni and Chu, for your help :).
clean_run solved my problem.


(sathish) #9

Hi Jenni/Chu,
Now I am getting a strange error. After adding clean_run => true, the first query was executed.
I tried to run other queries after the first statement, but even after changing the statement I get:
"[logstash.inputs.jdbc ] Java::ComSybaseJdbc4Jdbc::SybSQLException: SQL Anywhere Error -131: Syntax error near 'LIMIT' on line 1: SELECT count(*) AS "COUNT" FROM (select @@servername) AS "T1" LIMIT 1"
input {
  jdbc {
    jdbc_connection_string => "jdbc:sybase:Tds:server:port"
    jdbc_user => "xxxx"
    jdbc_password => "xxxxx"
    jdbc_driver_library => "path/jconn4.jar"
    jdbc_driver_class => "com.sybase.jdbc4.jdbc.SybDriver"
    jdbc_fetch_size => 1000
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    #statement_filepath => "path/query.sql"
    statement => "select @@servername"
    #use_column_value => true
    #tracking_column => "id"
    clean_run => true
    last_run_metadata_path => "path/.logstash_jdbc_last_run"
  }
}
output {
  stdout { codec => json_lines }
  #stdout { codec => rubydebug }
}


#10

I can't really offer you a clear solution for that problem, but it has to do with the paging. That's what generates the faulty statements with the "T1" part that is causing an error.
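One thing worth trying (a suggestion on my part, not a confirmed fix): since the paging wraps your statement in a subquery with a LIMIT that SQL Anywhere rejects, you could turn paging off so Logstash sends the statement unmodified. Roughly:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:sybase:Tds:server:port"
    jdbc_user => "xxxx"
    jdbc_password => "xxxxx"
    jdbc_driver_library => "path/jconn4.jar"
    jdbc_driver_class => "com.sybase.jdbc4.jdbc.SybDriver"
    statement => "select @@servername"
    # paging rewrites the statement into a "... AS T1 LIMIT n" subquery,
    # which Sybase/SQL Anywhere rejects; leave it disabled (the default)
    jdbc_paging_enabled => false
  }
}
```

If the result set is too large without paging, you may have to page manually in the SQL using the dialect your server supports (e.g. TOP/START AT) instead of relying on the plugin's generated LIMIT clause.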

In this thread people had the same problem, but with MariaDB:


(sathish) #11

Hi Jenni, somehow I fixed the problem. Thank you!


(system) #12

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.