Hi, I've been trying to find some documentation on the JDBC input plugin, since it seems like the ideal route for what I'm trying to do:
Essentially, I have a table in a Postgres database that I want to display in a simple Kibana visualization and make searchable via Elasticsearch. The table contains weather sensor data (temperature, sensor ID, etc.).
From everything I've searched, the JDBC input plugin seems like the right way to go (hopefully; correct me if I'm wrong, ha!).
I'm not a developer by profession but a QA, and I'm completely new to the ELK stack. I'm going by the instructions here: https://github.com/logstash-plugins/logstash-input-jdbc
Those have me cloning the repository and building the plugin myself, which is taking a little tinkering to get bundler and the JVM set up. Is there a simpler way besides building it yourself?
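From poking around, it looks like Logstash ships a plugin installer that might avoid the build entirely; I haven't verified this on my version, so the exact binary name below is my guess (on 1.5 it seems to be bin/plugin, and later versions apparently call it bin/logstash-plugin):

# run from the Logstash home directory; binary name varies by Logstash version
bin/plugin install logstash-input-jdbc

Anyway, once the plugin is available, it seems like I'd need to make a configuration file such as this one: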
# file: contacts-index-logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "postgres"
    # jdbc_password => "..." presumably goes here if the database needs one
    jdbc_validate_connection => true
    # path to the Postgres JDBC driver jar, and its driver class
    jdbc_driver_library => "/path/to/postgresql-9.4-1201.jdbc41.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * FROM sensors"
  }
}
output {
  elasticsearch {
    protocol => "http"
    index => "sensors"
    document_type => "sensor"
    # document_id needs a column that uniquely identifies each row, so that
    # re-running the query updates documents instead of duplicating them
    document_id => "%{uid}"
    host => "ES_NODE_HOST"
  }
}
However, I'm not really sure where this configuration file would go, and I'm honestly a little lost on the next steps after this. I'm really just looking for the most basic setup that lets me read from one table and search it (I suppose indexing it like the example above lets the search stay up to date whenever the table changes?). I'm also a bit confused about how that's different from the MusicBrainz demo here: https://www.elastic.co/blog/logstash-jdbc-input-plugin
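For what it's worth, here are my best guesses at those next steps; please correct anything that's wrong. First, I gather the config file can live anywhere, and you just point Logstash at it when starting it:

# run from the Logstash home directory; assumes the config file sits there too
bin/logstash -f contacts-index-logstash.conf

Second, if I'm reading the plugin docs right, keeping the index in sync isn't automatic: the query has to be re-run on a schedule. The schedule option and the :sql_last_value parameter are from the plugin docs (the parameter name may differ by plugin version), but the updated_at column is my own assumption about the sensors table:

input {
  jdbc {
    # ... same connection and driver settings as above ...
    # re-run the query every minute (cron syntax)
    schedule => "* * * * *"
    # only pull rows changed since the last run; assumes the table has an
    # updated_at timestamp column, which mine may not
    statement => "SELECT * FROM sensors WHERE updated_at > :sql_last_value"
  }
}

Is that roughly the idea, or am I off base?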