Configure JDBC input plugin for Logstash

I am exploring how to configure input plugins for Logstash. I have a Docker container with the ELK stack running on it. I was able to successfully set up Metricbeat, fetch logs from a REST API, and create visualizations.
The next thing I am trying is ingesting logs from a SQL Server database using the JDBC input plugin for Logstash. What I found in several blogs is that I need the sqljdbc JAR file plus an input and output configuration for Logstash containing the connection string, user, password, and query details.

input {
  jdbc {
    jdbc_driver_library => "c:/work/sqljdbc_6.4.0.0_enu.tar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://IPADDRESS:11.0.77.2;databasename=testdb"
    jdbc_user => "root"
    jdbc_password => "root"
    statement => "select * from testdata"
    schedule => "0 * * * *"
  }
}
output {
  elasticsearch {
    index => "sqldata"
    hosts => ["elasticsearch:9200"]
  }
}

My initial struggle was how to push this configuration to Logstash, since my Logstash is not in a separate container but inside the ELK Docker container. I found a command to do that, mentioned below, but after running it I get an error saying the elasticsearch host is unreachable (error details below).

docker run --rm -v simple-out.conf:/usr/share/logstash/config docker.elastic.co/logstash/logstash:6.2.2

Error received after logstash runs:

[2018-04-11T08:30:39,953][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-04-11T08:30:39,981][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-04-11T08:30:40,000][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}

Can anyone suggest what I am missing or doing incorrectly?

Inside the Logstash container there's no host named elasticsearch. Containers that are added to the same Docker network can access each other by container name, so that's something you can explore.
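For example, something like this should work (the network and container names here are just examples, not taken from your setup):

# Create a user-defined network and attach both containers to it;
# containers on the same network can resolve each other by container name.
docker network create elk-net
docker network connect elk-net elasticsearch
docker network connect elk-net logstash

Alternatively, start the containers with --network elk-net and --name elasticsearch in the first place; then the hostname elasticsearch in your output config would resolve.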

I resolved the error; I was using an incorrect command to update the config file. Instead, I used the following command:

docker cp simple-out.conf elktest:/etc/logstash/config.d

The above command made the Docker container run without any errors, but I can't see any logs from SQL Server. Do I need to do anything else?

Do the Logstash logs indicate that the query ran at all? You may have to increase the log level.

How would I know the query ran? How can I increase the log level, and are my config values correct?

How would I know the query ran?

Look in the Logstash log. It'll be obvious.

How can I increase the log level, and are my config values correct?

The log level can be set via logstash.yml or a command-line option. See the documentation.
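For example, either of these should work in 6.x (the Logstash binary path is an assumption based on the sebp/elk layout):

# In logstash.yml:
log.level: debug

# Or as a command-line option:
/opt/logstash/bin/logstash --log.level=debug -f /etc/logstash/conf.d/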

I can't see any logs indicating that the query ran; in fact, no logs are generated at all.

Will I have to copy "sqljdbc_6.4.0.0_enu.tar" to the Docker container where the Logstash configuration files reside? I suppose the path given for this file is in my Windows directory structure, and Logstash inside the container might be failing to find it there?

I did copy the JAR file into the /etc/logstash path and then ran the Docker container, but no logs were generated.

Have you disabled the schedule option so that it attempts to run the query right away? Are you able to read the Logstash logs at all?
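With no schedule option, the jdbc input runs the statement once at startup instead of waiting for the cron schedule, so you'd see a result (or an error) immediately. A minimal sketch of the input, with the schedule line commented out (the driver path and port here are assumptions):

input {
  jdbc {
    jdbc_driver_library => "/opt/logstash/logstash-core/lib/jars/mssql-jdbc-6.4.0.jre9.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://IPADDRESS:1433;databasename=testdb"
    jdbc_user => "root"
    jdbc_password => "root"
    statement => "select * from testdata"
    # schedule => "0 * * * *"   # commented out: the query runs once, immediately
  }
}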

I did disable the schedule option, but I am not able to see any logs. The following is what it shows:

==> /var/log/logstash/logstash-plain.log <==
[2018-04-11T13:33:37,670][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
[2018-04-11T13:33:37,691][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
[2018-04-11T13:33:38,417][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-04-11T13:33:39,062][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-04-11T13:33:39,805][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-11T13:33:43,379][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost], manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", id=>"c4ee5abcf701afed0db36d4aa16c4fc10da6a92bbd615d837cccdf2f368b7802", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_fedd1aa6-cc79-4dc1-aed4-f605fbd6ef66", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-04-11T13:33:43,503][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-11T13:33:44,207][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-04-11T13:33:44,215][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-04-11T13:33:44,438][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-04-11T13:33:44,686][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-04-11T13:33:44,689][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-04-11T13:33:44,714][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2018-04-11T13:33:45,533][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-04-11T13:33:45,740][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-04-11T13:33:45,744][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3c3b7b65 sleep>"}
[2018-04-11T13:33:45,913][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

If you enable debug logging, Logstash will tell you exactly what configuration it loads. Make sure your jdbc input is listed there.

I enabled debug logging in Logstash, but still no logs are detected. I also have a Beat configured for this ELK container, and it fetches data when I run Metricbeat (from the REST API). Is it possible that Metricbeat and the JDBC plugin won't both work with the same ELK container?

I enabled debug logging in Logstash, but still no logs are detected.

Then you have to fix that. You can't operate a service without being able to read its logs. Which Docker image is this? How are you starting the container? Where are you looking for the logs?

I also have a Beat configured for this ELK container, and it fetches data when I run Metricbeat (from the REST API). Is it possible that Metricbeat and the JDBC plugin won't both work with the same ELK container?

No, that's extremely unlikely.

By the way, your config file with the jdbc input shouldn't contain an elasticsearch output unless you wrap it in a conditional that selects only the events from the jdbc input. The same goes for any filters and outputs in your other configuration files. Unless you run multiple pipelines, the contents of all configuration files are concatenated into a single pipeline. Sorry for the terse description, but it's an extremely common misunderstanding and I lack the patience to explain it in detail every time.
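To make that concrete, a common pattern is to tag the events in the jdbc input and wrap the output in a conditional on that tag (a sketch; the tag name is arbitrary):

input {
  jdbc {
    # ... connection settings as above ...
    tags => ["jdbc"]
  }
}
output {
  if "jdbc" in [tags] {
    elasticsearch {
      index => "sqldata"
      hosts => ["elasticsearch:9200"]
    }
  }
}

Without the conditional, every event from every input (including your Metricbeat events) would be sent to this output as well.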

The Docker image I have is sebp/elk (this has Elasticsearch, Logstash, and Kibana 6.2.2 all in one container). To start and stop the container and to see the logs, I use Visual Studio Code. I also check in Kibana to confirm whether any index is created, but I cannot find one there either.

I updated the Logstash configuration by opening a shell into the Docker container and copying the file to the location where conf.d exists (/etc/logstash). I also copied the sqljdbc JAR file to the same location. I know I am missing (or doing wrong) some configuration, but I cannot find what it is.

Can you please help me understand what tags/properties I should have in my output? (I know it's a very basic question, but I am a newbie to Docker and ELK, so I'm getting confused by all the blogs available for configuring this plugin.)

Finally I can see something. I now have the following error, which seems to be just an incorrect driver reference. Does anyone know the correct driver reference?

[2018-04-12T10:19:24,907][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Jdbc jdbc_driver_library=>"/opt/logstash/logstash-core/lib/jars/mssql-jdbc-6.4.0.jre9.jar", jdbc_driver_class=>"com.microsoft.sqlserver.jdbc.SQLServerDriver", jdbc_connection_string=>"jdbc:sqlserver://IPADDRESS:1.0.0.1;databasename=testdb", jdbc_user=>"root", jdbc_password=><password>, statement=>"select * from testdata where id=11", jdbc_paging_enabled=>true, jdbc_page_size=>20000, id=>"0eb81a1f657fd4d05e933df6bd7f6e449ab77987ac8858be0c0e05f06280cead", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_fec7e690-f46e-4138-9dde-02f9e4a7f46c", enable_metric=>true, charset=>"UTF-8">, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 UTC}, last_run_metadata_path=>"/opt/logstash/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
  Error: com.microsoft.sqlserver.jdbc.SQLServerDriver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
  Exception: LogStash::ConfigurationError
  Stack: /opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.3/lib/logstash/plugin_mixins/jdbc.rb:159:in `open_jdbc_connection'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.3/lib/logstash/plugin_mixins/jdbc.rb:227:in `execute_statement'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.3/lib/logstash/inputs/jdbc.rb:271:in `execute_query'
/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.3/lib/logstash/inputs/jdbc.rb:256:in `run'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:516:in `inputworker'
/opt/logstash/logstash-core/lib/logstash/pipeline.rb:509:in `block in start_input'

I was copying the sqljdbc JAR file and the config file to incorrect locations. Because I had Elasticsearch, Logstash, and Kibana all in one container, the directory structure was different. I had to place the JAR file in /opt/logstash/logstash-core/lib/jars and keep my simple-out.conf in the /etc/logstash/conf.d/ directory.
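For reference, the copy commands that ended up working look like this (using the container name from the earlier docker cp command):

docker cp mssql-jdbc-6.4.0.jre9.jar elktest:/opt/logstash/logstash-core/lib/jars/
docker cp simple-out.conf elktest:/etc/logstash/conf.d/

with jdbc_driver_library in simple-out.conf pointing at /opt/logstash/logstash-core/lib/jars/mssql-jdbc-6.4.0.jre9.jar.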

