I found and read about jdbc_static and am trying to get my head around it.
But I have some questions before I dive deeper into this one.
Say that I have a database with a table whose rows contain e.g. an identifier, a database server and a database name, and I use the jdbc_static filter to load that data into memory. Would it be possible to iterate over that data and build another connection string that is then used with the jdbc input to fetch other data with a given SQL query?
The connection string for a jdbc input or jdbc_streaming filter has to be set when the pipeline is started. It cannot reference fields on an event since no events exist at that point. It can reference an environment variable.
So you would have to start a new instance of Logstash for each connection string you wanted to use, which would be ridiculously expensive.
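For reference, a jdbc input only knows its connection string at startup, typically hard-coded or pulled from an environment variable. A minimal sketch (the SOURCE_DB_* variables, driver path and query are placeholders, not anything from your setup):

```
input {
  jdbc {
    # Resolved once when the pipeline starts; it cannot come from event fields.
    jdbc_connection_string => "${SOURCE_DB_URL}"
    jdbc_user => "${SOURCE_DB_USER}"
    jdbc_password => "${SOURCE_DB_PASSWORD}"
    jdbc_driver_library => "/path/to/postgresql-jdbc.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT * FROM orders WHERE updated_at > :sql_last_value"
    schedule => "*/5 * * * *"
  }
}
```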
Also, you cannot iterate over the data in a jdbc_static filter; you can only do lookups against it.
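To illustrate what jdbc_static does offer: it caches remote tables locally and enriches events through parameterised lookups against that cache. A rough sketch based on the table you described (connection details, column types and the [identifier] event field are assumptions):

```
filter {
  jdbc_static {
    # Load the reference table into a local in-memory table when the pipeline starts.
    loaders => [
      {
        id => "db-targets"
        query => "SELECT identifier, db_server, db_name FROM database_targets"
        local_table => "targets"
      }
    ]
    local_db_objects => [
      {
        name => "targets"
        index_columns => ["identifier"]
        columns => [
          ["identifier", "varchar(64)"],
          ["db_server", "varchar(255)"],
          ["db_name", "varchar(255)"]
        ]
      }
    ]
    # The only way to use the cached data: look up rows by values on the event.
    local_lookups => [
      {
        query => "SELECT db_server, db_name FROM targets WHERE identifier = :id"
        parameters => { id => "[identifier]" }
        target => "db_target"
      }
    ]
    jdbc_connection_string => "${METADATA_DB_URL}"
    jdbc_user => "${METADATA_DB_USER}"
    jdbc_password => "${METADATA_DB_PASSWORD}"
    jdbc_driver_library => "/path/to/postgresql-jdbc.jar"
    jdbc_driver_class => "org.postgresql.Driver"
  }
}
```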
If you have a DB table that lists the databases you need to run a query against, then I would start with ksh and an SQL tool.
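Something along these lines, assuming a PostgreSQL metadata table and psql as the SQL client (table, column and host names are made up for illustration):

```
#!/bin/ksh
# Sketch: read (server, database) pairs from a metadata table, then run the
# same query against each target and write CSV files that Logstash could
# pick up with a file input.

QUERY="select * from orders where updated_at > now() - interval '1 hour'"

# Fetch the list of target databases from the metadata table.
psql -h metadata-host -d metadata_db -At -F ' ' \
     -c "select db_server, db_name from target_databases" |
while read server dbname; do
    # Run the real query against each target and dump the rows as CSV.
    psql -h "$server" -d "$dbname" -At -F ',' -c "$QUERY" \
        > "/var/tmp/extract_${server}_${dbname}.csv"
done
```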
Yep. I thought about something similar at first, but then I read about jdbc_static and what it could do, so I thought I would be able to get rid of extra tools. I'm happy I asked.