We have a project to use Elasticsearch as our analytics engine (as a black box that collects data from MariaDB, SQL Server, and Cassandra) for building reports and dashboards. I am testing Logstash with the JDBC input plugin to ship data from SQL Server and MariaDB into Elasticsearch. I deployed the ES server on Elastic Cloud (our plan is to build our ES cluster on Elastic Cloud), and I installed Logstash both on my local PC and on Ubuntu under Rancher, so I can test shipping data from both environments. I used the following sqlserver.conf:
input {
  jdbc {
    # SQL Server connection, JDBC driver, and the query to pull
    jdbc_connection_string => "jdbc:sqlserver://HOST_IP:1433;databaseName=mydatabase;user=milad;password=password;"
    jdbc_driver_library => "C:\Users\milad\Downloads\sqljdbc_4.2.8112.200_enu.tar\sqljdbc_4.2.8112.200_enu\sqljdbc_4.2\enu\jre8\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_user => "milad"
    jdbc_password => "password"
    statement => "SELECT * FROM dbo.CRM_Booking"
  }
}
output {
  elasticsearch {
    # Elastic Cloud endpoint and target index; BookingId is used as the document id
    hosts => ["https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243"]
    index => "tbl_CRM_Booking"
    document_id => "%{BookingId}"
    user => "elastic"
    password => "y...PassWorD...."
  }
}
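(For the MariaDB side I intend to reuse essentially the same pipeline with a different JDBC driver. This is only a rough sketch of the input block I have in mind, not something I have tested yet; the driver jar path, host, database, and table name are placeholders:)

input {
  jdbc {
    # Placeholder MariaDB connection; assumes the MariaDB Connector/J driver
    jdbc_connection_string => "jdbc:mariadb://HOST_IP:3306/mydatabase"
    jdbc_driver_library => "C:\Users\milad\Downloads\mariadb-java-client-2.3.0.jar"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    jdbc_user => "milad"
    jdbc_password => "password"
    statement => "SELECT * FROM some_table"
  }
}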
I used this command to run Logstash on my PC and try to ingest the data into ES:
PS C:\ProgramData\Elastic\logstash-6.2.4\bin> .\logstash -f C:\ProgramData\Elastic\logstash-6.2.4\config\sqlserver.conf
but I am getting the following log output:
Sending Logstash's logs to C:/ProgramData/Elastic/logstash-6.2.4/logs which is now configured via log4j2.properties
[2019-02-15T13:05:55,063][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/ProgramData/Elastic/logstash-6.2.4/modules/fb_apache/configuration"}
[2019-02-15T13:05:55,099][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/ProgramData/Elastic/logstash-6.2.4/modules/netflow/configuration"}
[2019-02-15T13:05:55,265][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-02-15T13:05:55,707][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2019-02-15T13:05:56,176][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-02-15T13:05:59,647][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-02-15T13:06:00,114][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/]}}
[2019-02-15T13:06:00,130][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/, :path=>"/"}
[2019-02-15T13:06:00,838][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/"}
[2019-02-15T13:06:01,123][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-02-15T13:06:01,131][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-02-15T13:06:01,162][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-02-15T13:06:01,209][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-02-15T13:06:01,446][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243"]}
[2019-02-15T13:06:02,163][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x588770bd run>"}
[2019-02-15T13:06:02,242][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2019-02-15T13:06:03,459][INFO ][logstash.inputs.jdbc ] (0.029523s) SELECT * FROM dbo.CRM_Booking
[2019-02-15T13:06:04,182][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x588770bd run>"}
I am also getting almost the same messages from the Logstash instance on Rancher.
Note: I can curl https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243 from my PC, and I was able to create an index just to test the connection.
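(For that connectivity test I ran something along these lines; test-index is just a throwaway name:)

curl -XPUT "https://elastic:xxxxxx@ccccca988e19691429140ttttuuuu.eu-west-1.aws.found.io:9243/test-index"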
Any help with this issue would be appreciated. I have checked other similar questions, but nothing helped.