Failing to connect Logstash 5.6.0 to Oracle with the jdbc input

Hello,

I am failing to connect my Logstash v5.6.0 to an Oracle 12c database. I tried both ojdbc6.jar and ojdbc7.jar.
I checked that jdbc_driver_class is Java::oracle.jdbc.driver.OracleDriver (with an uppercase "J").

I am running on CentOS 7.3.1611.

Here is the jdbc input config:
jdbc {
  jdbc_driver_library => "ojdbc7.jar"
  jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
  jdbc_connection_string => "jdbc:oracle:thin:@//stardb:1521/GLX"
  jdbc_user => "usr"
  jdbc_password => "pwd"
  schedule => "* * * * *"
  statement => "select sysdate from dual"
  last_run_metadata_path => "jdbc_metadata.data"
  type => "glx_audit"
}

And an extract of the log files:

Sep 24 20:14:00 JLD logstash: 218952 Java::oracle.jdbc.driver.OracleDriver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Sep 24 20:14:00 JLD logstash: 218952 /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.2.4/lib/logstash/plugin_mixins/jdbc.rb:157:in `open_jdbc_connection'

Can someone help on this issue?
Many thanks

Have you tried putting the absolute path to the jar file in jdbc_driver_library?
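For example, something along these lines (the path below is only an illustration; point it at wherever the jar actually lives on your system):

```
jdbc {
  # Absolute path to the driver jar (hypothetical location)
  jdbc_driver_library => "/etc/logstash/drivers/ojdbc7.jar"
  jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
  ...
}
```

A relative path like "ojdbc7.jar" is resolved against Logstash's working directory, which under systemd is whatever WorkingDirectory the unit file sets, so an absolute path removes that ambiguity.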

Hi @magnusbaeck,

As soon as I configure an absolute path in jdbc_driver_library, Logstash restarts when the query is executed.
There is no error in the Logstash log files, but I do see one in journalctl:

Sep 25 07:56:00 JLD.CentOS-Elasticstack logstash[2066]: LoadError: no such file to load -- /etc/logstash/drivers/ojdbc7
Sep 25 07:56:00 JLD.CentOS-Elasticstack logstash[2066]: require at org/jruby/RubyKernel.java:1040
Sep 25 07:56:00 JLD.CentOS-Elasticstack logstash[2066]: require at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyg
Sep 25 07:56:00 JLD.CentOS-Elasticstack logstash[2066]: load_drivers at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc
Sep 25 07:56:00 JLD.CentOS-Elasticstack logstash[2066]: each at org/jruby/RubyArray.java:1613

My Logstash is started with systemd, and the only way I found to get rid of this error is to put the ojdbc jar files in the service's WorkingDirectory, as defined in the systemd unit file (/etc/systemd/system/logstash.service). Here is the content of that file:

[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

Hi @magnusbaeck,

It turns out I had permission issues on the ojdbc files.
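For anyone hitting the same thing: a quick way to check this kind of problem is to verify the file is readable by the service user. In practice you would run something like `sudo -u logstash test -r /etc/logstash/drivers/ojdbc7.jar` against the real jar (that path is an assumption); the sketch below just demonstrates the `test -r` check itself with a throwaway file:

```shell
# Sketch of the readability check, using a temp file as a stand-in for ojdbc7.jar.
jar=$(mktemp)               # created owned by the current user, mode 600
chmod 600 "$jar"
test -r "$jar" && echo "readable" || echo "not readable"
rm -f "$jar"
```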

One strange thing though:
in the Logstash log files I can see regular JDBC connection failures, but my query (select sysdate from dual) is still executed and the result is sent to ES (actually duplicated).

[2017-09-25T11:11:00,064][INFO ][logstash.inputs.jdbc     ] (0.002000s) select sysdate from dual
[2017-09-25T11:11:28,182][WARN ][logstash.inputs.jdbc     ] Failed test_connection. {:exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLRecoverableException: IO Error: Unknown host specified >}
[2017-09-25T11:11:28,183][WARN ][logstash.inputs.jdbc     ] Exception when executing JDBC query {:exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLRecoverableException: IO Error: Unknown host specified >}
[2017-09-25T11:11:28,184][WARN ][logstash.inputs.jdbc     ] Attempt reconnection.
[2017-09-25T11:11:28,204][WARN ][logstash.inputs.jdbc     ] Failed test_connection. {:exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLRecoverableException: IO Error: Unknown host specified >}

Any idea about that? Since these logs are at WARN level, is this something that can safely be ignored?

I am wondering what could be the reason for getting duplicates in ES with a query such as "select sysdate from dual". The extract below is the result of:
curl http://localhost:9200/logging-*/_search?pretty=true
just after the first execution of the query.

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "logging-2017.09.25",
        "_type" : "%{[@metadata][type]}",
        "_id" : "AV64Tc5SyTZFeXS2Hp6O",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "sysdate" : "2017-09-25T09:10:36.000Z",
          "@timestamp" : "2017-09-25T09:11:00.066Z",
          "type" : "glx_audit"
        }
      },
      {
        "_index" : "logging-2017.09.25",
        "_type" : "%{[@metadata][type]}",
        "_id" : "AV64Tc4ryTZFeXS2Hp6N",
        "_score" : 1.0,
        "_source" : {
          "@version" : "1",
          "sysdate" : "2017-09-25T09:10:36.000Z",
          "@timestamp" : "2017-09-25T09:11:00.066Z",
          "type" : "glx_audit"
        }
      }
    ]
  }
}
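For what it's worth, duplicates like this are commonly avoided by giving each event a deterministic document ID in the elasticsearch output, so that a re-sent event overwrites the existing document instead of creating a new one. A sketch only (the fingerprint filter usage, field names, and index pattern below are assumptions, not taken from your actual pipeline):

```
filter {
  # Assumption: these fields define uniqueness for the event
  fingerprint {
    source => ["sysdate", "type"]
    concatenate_sources => true
    method => "SHA256"
    target => "[@metadata][fingerprint]"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logging-%{+YYYY.MM.dd}"
    # Same event indexed twice gets the same _id and overwrites itself
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

Incidentally, the literal `"_type" : "%{[@metadata][type]}"` in your search results suggests the output references a `[@metadata][type]` field that is never set, since sprintf references to missing fields are emitted verbatim.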
