If you want to copy MS SQL Server data to an Elasticsearch index, you can write the configuration below into a Logstash config file.
What I did:
1- First, create a new config file (because we do not want to change our default config file for a single operation).
2- Write the configuration below into it.
3- Give the config file a name; I named mine jdbc.config.
Code:
input {
  jdbc {
    # SQL Server connection string; replace LOCALHOST, DATABASE, user_id and user_password with your own values
    jdbc_connection_string => "jdbc:sqlserver://LOCALHOST;databaseName=DATABASE;user=user_id;password=user_password;integratedSecurity=false;"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    # path to the Microsoft JDBC driver jar you downloaded
    jdbc_driver_library => "C:\Users\user\Desktop\sqljdbc_4.2\enu\jre8\sqljdbc42.jar"
    jdbc_user => "user_id"
    jdbc_password => "user_password"
    # the query whose rows will be indexed; put your table name here
    statement => "SELECT * FROM YOUR_TABLE"
  }
}
output {
  elasticsearch {
    hosts => ["192.xx.xx:9200"]           # your Elasticsearch host; you can use localhost instead
    user => "YOUR_ELASTIC_USERNAME"       # only needed if security is enabled
    password => "YOUR_ELASTIC_PASSWORD"   # only needed if security is enabled
    index => "sql_extended"
  }
}
You should download JDK 8 and the Microsoft JDBC driver 4.2 (sqljdbc42), and add their paths to your environment PATH variable (if you do not know how to do it, you can easily find instructions online).
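To check that the JDK is picked up from the PATH, you can run the command below in a new command prompt; the exact build number will differ on your machine:

java -version
(it should report a 1.8.x version)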
I hope this helps; if you have questions, you can ask me.
BTW: Do not forget to run Logstash with your new config file, for example as shown below.
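A minimal example, assuming Logstash is extracted to C:\logstash and jdbc.config is saved in its config folder (adjust the paths to your own install):

cd C:\logstash
bin\logstash.bat -f config\jdbc.config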
It is better to use Java JDK 8 and JDBC 4.2; at least, that is what I tried.
BTW: If you fix your version problem, you should check your jvm.options file before running Logstash, because otherwise the heap settings stay at their default values:
-Xms1g
-Xmx1g
These set the minimum and maximum heap memory, and it is better to keep both at the same value. If you have a lot of data, you should increase them, for example to
-Xms4g
-Xmx4g
(it is better to use no more than half of your machine's RAM). This means the JVM uses a minimum of 4 GB and a maximum of 4 GB of memory. If you stay at 1g, you might get a Java heap space error while copying your database. I hope this will help you.
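One more thing that can help with big tables (this is an extra suggestion from me, not something you must do): the jdbc input can fetch the result set in pages instead of loading everything at once, which reduces memory pressure. The page size below is just an example value, tune it for your data:

input {
  jdbc {
    # ... same connection settings as above ...
    jdbc_paging_enabled => true    # run the query in pages instead of one big result set
    jdbc_page_size => 50000        # rows per page
  }
}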