I'm using Logstash 2.4.1 to load data into Elasticsearch 2.4.6.
I have the following Logstash config:
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@database:1521:db1"
    jdbc_user => "user"
    jdbc_password => "password"
    jdbc_driver_library => "ojdbc6-11.2.0.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    parameters => { "id" => 1 }
    statement => "SELECT modify_date, userName FROM user WHERE id = :id AND modify_date >= :sql_last_value"
    schedule => "*/1 * * * *"
    tracking_column => "modify_date"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index1"
    document_type => "USER"
  }
  stdout { codec => rubydebug }
}
So, every minute, it queries the database to check whether there is new data for Elasticsearch.
It works perfectly, but there is one problem:
We have around 100 clients, and they are all in the same database instance.
That means I have 100 config files and 100 Logstash instances running, which means 100 open database connections:
nohup ./logstash -f client-1.conf
nohup ./logstash -f client-2.conf
nohup ./logstash -f client-3.conf
nohup ./logstash -f client-4.conf
nohup ./logstash -f client-5.conf
and so on...
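In other words, what I effectively end up running is something like this (the loop is only an illustration of the pattern above):

for i in $(seq 1 100); do
  # one Logstash process per client config = one open database connection per client
  nohup ./logstash -f client-$i.conf &
done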
This is just bad.
Is there any way I can use the same connection for all my scripts?
The only difference between all those scripts is the id parameter and the index name; each client will have a different id and a different index:
parameters => { "id" => 1 }
index => "index1"
Any ideas?