Currently, my Logstash application reads data from a Kafka stream.
I would like to add data from a different source which is updated once a week.
To do so, I would like to have an external (Ruby or Java?) plugin that can load data from the new source into memory on a daily basis. (I would like to avoid querying the database every time a Kafka message is received.)
Once loaded into memory, I plan on adding some new fields to ES that are not obtainable from Kafka.
Is it possible to create a Logstash in-memory plugin which does what I describe above?
If not, is there a workaround?
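One workaround sketch, assuming the weekly data can be fetched as a Hash keyed by the join field: keep a time-based in-memory cache and refresh it when stale. Inside a Logstash `ruby` filter this object would live in the filter's `init` block (or a class-level variable) so it survives across events; everything here (class name, TTL, the loader block) is illustrative, not an existing Logstash API.

```ruby
# Minimal sketch of a time-based in-memory lookup cache. The loader
# block is whatever queries the weekly-updated source and returns the
# full lookup table as a Hash.
class EnrichmentCache
  def initialize(ttl_seconds:, &loader)
    @ttl = ttl_seconds
    @loader = loader
    @mutex = Mutex.new
    @data = nil
    @loaded_at = nil
  end

  # Look up one key, reloading the whole table first if it is stale.
  def lookup(key)
    refresh_if_stale
    @data[key]
  end

  private

  def refresh_if_stale
    @mutex.synchronize do
      if @data.nil? || (Time.now - @loaded_at) > @ttl
        @data = @loader.call   # e.g. query the weekly-updated source
        @loaded_at = Time.now
      end
    end
  end
end

# Usage: refresh at most once a day; the block stands in for the real query.
cache = EnrichmentCache.new(ttl_seconds: 86_400) do
  { "id-1" => { "region" => "eu" } }
end
cache.lookup("id-1")
```

The mutex matters because Logstash runs filter workers on multiple threads, so two events can race to refresh the cache at the same time.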
Thanks! The jdbc_static plugin looks promising.
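For reference, a jdbc_static filter for this use case might look roughly like the sketch below: it copies the remote table into a local in-memory DB on a schedule, then enriches each event with a local lookup. All table names, columns, the `[kafka_id]` field, and the connection settings are placeholders, not taken from the thread.

```
filter {
  jdbc_static {
    loaders => [
      {
        id => "weekly_data"
        query => "SELECT id, region FROM source_table"
        local_table => "weekly_data"
      }
    ]
    local_db_objects => [
      {
        name => "weekly_data"
        index_columns => ["id"]
        columns => [
          ["id", "varchar(64)"],
          ["region", "varchar(16)"]
        ]
      }
    ]
    local_lookups => [
      {
        query => "SELECT region FROM weekly_data WHERE id = :id"
        parameters => { id => "[kafka_id]" }
        target => "enrichment"
      }
    ]
    loader_schedule => "0 2 * * *"   # refresh the local copy daily at 02:00
    jdbc_driver_class => "..."
    jdbc_connection_string => "..."
  }
}
```

The `loader_schedule` is what gives you the "load once a day, not once per Kafka message" behavior.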
In my case, this is not a direct connection to a DB; rather, an API wrapped around the DB accepts SQL-like queries. The value the API returns to me is data represented as a Thrift object.
So, with the jdbc_static plugin I would need to be able 1) to query the API and 2) to decode the Thrift objects. Do you think this is possible?
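Since jdbc_static expects a real JDBC connection, a Thrift-over-HTTP API would likely need a small custom loader instead. A hedged sketch of its shape, with the fetch and decode steps injected so the Thrift-specific decoding (e.g. via the `thrift` gem's `Thrift::Deserializer`) stays pluggable; the class, its parameters, and the stubbed fetcher/decoder below are all hypothetical:

```ruby
# Hypothetical loader: run a SQL-like query against the wrapper API,
# decode the Thrift payload, and index the rows for in-memory lookups.
class ApiLoader
  def initialize(fetcher:, decoder:)
    @fetcher = fetcher   # callable: query string -> raw response bytes
    @decoder = decoder   # callable: raw bytes -> Array of row Hashes
  end

  # Returns a Hash keyed by `key_field`, ready to drop into a cache.
  def load(query, key_field)
    raw = @fetcher.call(query)
    rows = @decoder.call(raw)
    rows.each_with_object({}) { |row, acc| acc[row[key_field]] = row }
  end
end

# Usage with stand-in fetch/decode steps (the real decoder would
# deserialize the Thrift bytes into row Hashes).
fetcher = ->(_query) { "raw-thrift-bytes" }
decoder = ->(_raw)   { [{ "id" => "a", "region" => "eu" }] }
loader = ApiLoader.new(fetcher: fetcher, decoder: decoder)
loader.load("SELECT id, region FROM t", "id")
```

The result of `load` is exactly the kind of lookup Hash a daily in-memory cache could hold, so the API-plus-Thrift source slots into the same enrichment flow as a plain DB would.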