Error after v5.6 upgrade from v5.5

Hi guys,

We recently upgraded our Elastic Stack from v5.5 to v5.6. We encountered the error below when we tried running Logstash to populate the new v5.6 indices. Any idea what's going on here? We have already re-installed the logstash-input-jdbc plugin.

```
[2019-10-29T15:06:01,970][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Jdbc jdbc_driver_library=>"~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar", jdbc_driver_class=>"", jdbc_connection_string=>"jdbc:sqlserver://;user=;password=;", jdbc_user=>"", jdbc_password=>, jdbc_validate_connection=>true, jdbc_validation_timeout=>-1, last_run_metadata_path=>"/etc/logstash/.logstash_jdbc_something_type_ref_last_run", clean_run=>true, statement=>"SELECT * FROM [dbo].

", type=>"somethingtyperef", id=>"e2d1740c363567586234b229d04384c304dc871c-40", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_d3065ef2-da96-4e23-b8e4-9e517e5438d3", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 +0000}, use_column_value=>false, tracking_column_type=>"numeric", record_last_run=>true, lowercase_column_names=>true>
Error: not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
```
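That error means Logstash could not load the jar at the configured path. Before digging further, it can help to confirm the jar is readable at the exact, literal path from the config (no shell expansion happens on config values). A minimal check, using the path from the post:

```shell
# Sanity check: is the driver jar readable at the literal configured path?
# The path below is the one from the post and is an assumption for your setup.
check_jar() {
  if [ -r "$1" ]; then
    echo "driver jar found: $1"
  else
    echo "driver jar missing or unreadable: $1"
  fi
}

check_jar "/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"
```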

A workaround for the failure to load the driver might be this. Read through the whole thread before implementing.

Thanks @Badger!

Additional information:

```
input {
  jdbc {
    jdbc_driver_library => "${JDBC_DRIVER_PATH:/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar}"
    jdbc_driver_class => ""
    jdbc_connection_string => "jdbc:sqlserver://server;user=username;password=password;"
    jdbc_user => "username"
    jdbc_password => "password"
    jdbc_validate_connection => true
    jdbc_validation_timeout => -1
    last_run_metadata_path => "${LOGSTASH_METADATA_PATH:/etc/logstash}/.logstash_jdbc_profile_last_run"
    clean_run => true
    statement => "SELECT * FROM dbo.Table ORDER BY Id"
  }
}
```

As you can see above, jdbc_driver_library uses the value of the ${JDBC_DRIVER_PATH} environment variable, which is **~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar**. This path is not recognized for some reason. The error did not happen again when I removed the ${JDBC_DRIVER_PATH} env variable from the conf file and hard-coded the path as /home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar.

I will try changing the value of ${JDBC_DRIVER_PATH}, replacing ~ with /home/ubuntu.
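Assuming Logstash's `${VAR:default}` syntax resolves like the shell's `${VAR:-default}` fallback (used here purely as an illustration), the substitution can be sketched in plain shell — note how a set variable containing a literal `~` wins over the absolute default:

```shell
# Illustration only: shell ${VAR:-default} mirrors a value-or-default lookup.
# Variable unset: the absolute default path is used.
unset JDBC_DRIVER_PATH
echo "${JDBC_DRIVER_PATH:-/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar}"

# Variable set to a value with a literal "~": that value wins, reproducing
# the failure described above.
JDBC_DRIVER_PATH='~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar'
echo "${JDBC_DRIVER_PATH:-/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar}"
```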

Currently trying to populate the employee index with the settings below:

```
input {
  jdbc {
    jdbc_driver_library => "~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => ""
    jdbc_connection_string => "jdbc:sqlserver://SERVER;user=USER;password=PASSWORD"
    jdbc_user => "DB_USER"
    jdbc_password => "DB_PASSWORD"
    jdbc_validate_connection => true
    jdbc_validation_timeout => -1
    statement => "SELECT * FROM [dbo].Employee ORDER BY ID"
    type => "employee"
  }
}
filter {
}
output {
}
```

NOTE: the filter and output sections of the conf file are purposely left blank.


```
sudo /usr/share/logstash/bin/logstash -f /home/ubuntu/Employee-pipeline.conf --path.settings /etc/logstash/ /var/lib/logstash_new
```

RESULT: It looks like Logstash does not know about, or does not have access to, ~/sqljdbc...*.jar.

I also confirmed that the mssql-jdbc-6.2.1.jre8.jar file exists at that location.

However, when I changed the path to /home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar, it ran successfully.

So ~/ should be the same as /home/ubuntu, yet Logstash apparently does not expand the tilde when loading the driver.
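The likely reason: tilde expansion is a shell feature, applied only to unquoted words on a command line. A quoted value in a conf file reaches the Logstash (Java) process verbatim, so it tries to open a path that literally starts with `~`. A quick demonstration:

```shell
# Unquoted: the shell rewrites ~ to $HOME before any program sees it.
printf '%s\n' ~/mssql-jdbc-6.2.1.jre8.jar

# Quoted (what a value inside a conf file amounts to): the literal ~ survives.
printf '%s\n' '~/mssql-jdbc-6.2.1.jre8.jar'

# POSIX-safe manual expansion of a leading ~ in a stored value:
P='~/mssql-jdbc-6.2.1.jre8.jar'
printf '%s\n' "$HOME${P#"~"}"
```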

This started occurring after we upgraded our Elastic Stack from v5.5 to v5.6. Also note that the error does not occur when we run the same conf file via the logstash service.
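One plausible explanation for the difference between the sudo run and the service (an assumption, not confirmed in this thread) is that the two invocations see different environments: a service manager typically starts logstash with its own stripped-down environment, while sudo partially preserves or resets yours, so variables like JDBC_DRIVER_PATH (and anything $HOME-dependent) can resolve differently. `env -i` simulates an empty, service-like environment:

```shell
# Assumption for illustration: the service sees a stripped-down environment.
export JDBC_DRIVER_PATH='/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar'

# Child shell inheriting your environment: the variable is visible.
sh -c 'echo "interactive-like: ${JDBC_DRIVER_PATH:-unset}"'

# Child shell with an emptied environment: the variable is gone.
env -i /bin/sh -c 'echo "service-like: ${JDBC_DRIVER_PATH:-unset}"'
```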

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.