Issue with SQL Server data import into ES (hosted on Elastic Cloud) using the Logstash JDBC input

We have a project to use Elasticsearch as our analytics engine (as a black box that collects data from MariaDB, SQL Server, and Cassandra) to build reports and dashboards. I am testing Logstash with the JDBC input plugin to ship data from SQL Server and MariaDB into Elasticsearch. I deployed the ES cluster on Elastic Cloud (our plan is to build our ES cluster on Elastic Cloud), and I installed Logstash both on my local PC and on Ubuntu over Rancher, to test shipping data from both places. I used the following sqlserver.conf:

input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://HOST_IP:1433;databaseName=mydatabase;user=milad;password=password;"
    jdbc_driver_library => "C:\Users\milad\Downloads\sqljdbc_4.2.8112.200_enu.tar\sqljdbc_4.2.8112.200_enu\sqljdbc_4.2\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_user => "milad"
    jdbc_password => "password"
    statement => "SELECT * FROM dbo.CRM_Booking"
  }
}
output {
  elasticsearch {
    hosts => ["https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243"]
    index => "tbl_CRM_Booking"
    document_id => "%{BookingId}"
    user => "elastic"
    password => "y...PassWorD...."
  }
}

I used this command to run Logstash on my PC and try to ingest into ES:

PS C:\ProgramData\Elastic\logstash-6.2.4\bin>  .\logstash -f  C:\ProgramData\Elastic\logstash-6.2.4\config\sqlserver.conf

but I am getting the following logs, and no data shows up in Elasticsearch:

Sending Logstash's logs to C:/ProgramData/Elastic/logstash-6.2.4/logs which is now configured via log4j2.properties
[2019-02-15T13:05:55,063][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/ProgramData/Elastic/logstash-6.2.4/modules/fb_apache/configuration"}
[2019-02-15T13:05:55,099][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/ProgramData/Elastic/logstash-6.2.4/modules/netflow/configuration"}
[2019-02-15T13:05:55,265][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-02-15T13:05:55,707][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2019-02-15T13:05:56,176][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-02-15T13:05:59,647][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-02-15T13:06:00,114][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/]}}
[2019-02-15T13:06:00,130][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/, :path=>"/"}
[2019-02-15T13:06:00,838][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/"}
[2019-02-15T13:06:01,123][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-02-15T13:06:01,131][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-02-15T13:06:01,162][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-02-15T13:06:01,209][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-02-15T13:06:01,446][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243"]}
[2019-02-15T13:06:02,163][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x588770bd run>"}
[2019-02-15T13:06:02,242][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2019-02-15T13:06:03,459][INFO ][logstash.inputs.jdbc     ] (0.029523s) SELECT * FROM dbo.CRM_Booking
[2019-02-15T13:06:04,182][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x588770bd run>"}   

I am also getting almost the same log output from Logstash on Rancher.

Note: I can curl https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243 from my PC, and I could create an index just to test.
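
For reference, a minimal sketch of that kind of check, assuming the same endpoint and the elastic user (test-index is just an example name, and xxxxxx stands in for the real password):

# should return the cluster info JSON
curl https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/
# creates a throw-away index to verify write access
curl -XPUT https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/test-index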

Any help with this issue would be appreciated. I checked other similar questions, but nothing helped.

Change your path to use forward slashes:

jdbc_driver_library => "C:/Users/milad/Downloads/sqljdbc_4.2.8112.200_enu.tar/sqljdbc_4.2.8112.200_enu/sqljdbc_4.2/enu/jre8/sqljdbc42.jar"

Hope this helps you.

Regards

Thanks for your help.

Now I am getting this log:

[2019-02-18T09:59:42,497][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "" hasn't been registered>, :backtrace=>["C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/settings.rb:37:in `get_setting'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/settings.rb:70:in `set_value'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/settings.rb:89:in `block in merge'", "org/jruby/RubyHash.java:1343:in `each'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/settings.rb:89:in `merge'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/settings.rb:138:in `validate_all'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/runner.rb:264:in `execute'", "C:/ProgramData/Elastic/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/runner.rb:219:in `run'", "C:/ProgramData/Elastic/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "C:\\ProgramData\\Elastic\\logstash-6.2.4\\lib\\bootstrap\\environment.rb:67:in `<main>'"]}

Use it like this: keep your sqlserver.conf file in the logstash-6.2.4 folder and try to run it as shown below.

PS C:\ProgramData\Elastic\logstash-6.2.4>  bin\logstash -f  sqlserver.conf

Also, my second point is about this path:

C:/Users/milad/Downloads/sqljdbc_4.2.8112.200_enu.tar/sqljdbc_4.2.8112.200_enu/sqljdbc_4.2/enu/jre8/sqljdbc42.jar

How are you reading the jar file from inside a tar file? That's not possible. Extract the jar and give the proper location in the path.
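
For example, extracting the archive could look something like this (assuming a command-line tar is available on the machine; any archive tool works just as well):

tar -xf sqljdbc_4.2.8112.200_enu.tar
# the extracted folder contains sqljdbc_4.2/enu/jre8/sqljdbc42.jar; point jdbc_driver_library at that file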

Sample Ex:

input {
  jdbc {
    jdbc_driver_library => "C:/Users/xxxx/Downloads/sqljdbc4-2.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://localhost:1433;user=qqqq;password=vvvv@123;database=QWEQWE"
    jdbc_user => "qqqq"
    jdbc_password => "vvvv@123"
    statement => "SELECT TOP 1000 * FROM SAMPLE"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
output {
  elasticsearch {
    codec => json
    hosts => ["localhost:9200"]
    index => "index_name"
    document_type => "_doc"
  }
  stdout { codec => rubydebug }
}

I extracted it as you said; now the jar is at:
C:\Users\milad\Downloads\sqljdbc_4.2.8112.200_enu\sqljdbc_4.2\enu\jre8\sqljdbc42.jar

Now the input is:
input {
  jdbc {
    jdbc_driver_library => "C:/Users/milad/Downloads/sqljdbc_4.2.8112.200_enu/sqljdbc_4.2/enu/jre8/sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    ....

And now I get this error:

[2019-02-18T11:55:32,032][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-02-18T11:55:32,291][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 9, column 22 (byte 460) after input {\n  jdbc {\n    jdbc_driver_library => \"C:/Users/milad/Downloads/sqljdbc_4.2.8112.200_enu/sqljdbc_4.2/enu/jre8/sqljdbc42.jar\"\n\tjdbc_driver_class => \"com.microsoft.sqlserver.jdbc.SQLServerDriver\"\n    jdbc_connection_string => \"jdbc:sqlserver://HOST_IP:1433;databaseName=mydatabase;user=milad;password=password\"\n    jdbc_user => \"milad\"\n    jdbc_password => \"password\"\n    statement => \"SELECT * FROM dbo.CRM_Booking\"\n\tjdbc_paging_enabled ", :backtrace=>["C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/agent.rb:90:in `execute'", "C:/ProgramData/Elastic/logstash-6.2.4/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "C:/ProgramData/Elastic/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
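
Reading the message, the parser stops right after jdbc_paging_enabled at line 9, column 22, which suggests that option is missing its =>. For reference, a minimal sketch of the jdbc block with => on every option, using the values quoted back in the error (the paging values just mirror the sample above):

input {
  jdbc {
    jdbc_driver_library => "C:/Users/milad/Downloads/sqljdbc_4.2.8112.200_enu/sqljdbc_4.2/enu/jre8/sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://HOST_IP:1433;databaseName=mydatabase;user=milad;password=password"
    jdbc_user => "milad"
    jdbc_password => "password"
    statement => "SELECT * FROM dbo.CRM_Booking"
    # the two options the parser choked on, now with =>
    jdbc_paging_enabled => true
    jdbc_page_size => 50000
  }
}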

Please use a correct query, as shown below:

statement => "select BookingId,BookingName,......... from CRM_Booking"

First try to check with localhost; then you can load into the server "ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io", as shown in the example above.

Regards

@balumurari1 thanks for your help,
The select * / explicit column list wasn't the main problem. But when I read your comment I double-checked whether there were any rows in that table, and it turned out we don't have any data there. When I switched to another table that does have data, with the Elasticsearch output pointed at localhost:9200, it works without any issue. But when I replaced localhost:9200 with the real Elasticsearch host on Elastic Cloud, I got this error: Got response code '401' contacting Elasticsearch

[2019-02-19T13:40:01,611][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/'"}
[2019-02-19T13:40:03,231][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>2, "stalling_thread_info"=>{"other"=>[{"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>32, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}

So I think it is about authentication when connecting to the Elasticsearch instance on Elastic Cloud, which is strange, because my config file has the username and password.

I also tested pinging and browsing the URL https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243; when I enter the username and password in the browser, I get a response. So how can I fix it?
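
In case it helps to see it spelled out, this is a minimal sketch of the output block I expect to need, assuming the endpoint and elastic user from above; the password is a placeholder for the one that works in the browser (a 401 means Elasticsearch rejected whatever credentials Logstash sent), and I lowercased the index name since Elasticsearch index names must be lowercase:

output {
  elasticsearch {
    hosts => ["https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243"]
    user => "elastic"
    # must match exactly what works in the browser prompt
    password => "REAL_PASSWORD_HERE"
    index => "tbl_crm_booking"
    document_id => "%{BookingId}"
  }
}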
