This is what I've understood so far: in feeder mode, the JDBC plugin is run as a
bash script with parameters similar to the river. The documentation says that
it is a push model. Can anyone explain how it works? If new data is
pushed into my DB, what role does the feeder play from there on?
The "push model" works like this: a standalone JVM runs the JDBC
plugin, which connects to an Elasticsearch cluster using the
TransportClient. The SQL statement(s) are then executed, and the resulting
rows are processed and indexed into Elasticsearch by the bulk processor.
Because the nodes in the cluster are not "pulling" data from an external
source into the cluster JVMs, as river instances do, I call the standalone
JDBC plugin JVM the "push" model.
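The flow can be sketched in Java. This is a minimal illustration, not the plugin's actual code: `buildBulkBody` is a hypothetical helper that turns fetched rows into the NDJSON body of a bulk request, whereas the real plugin hands documents to the TransportClient's BulkProcessor instead of building JSON by hand.

```java
import java.util.*;

// Sketch of the "push" flow: rows fetched over JDBC are turned into
// a bulk request body and pushed to the cluster by the standalone JVM.
public class FeederSketch {

    // Build the NDJSON body of a _bulk request from row maps.
    // (Hypothetical helper; the real plugin uses BulkProcessor.)
    static String buildBulkBody(List<Map<String, Object>> rows, String index) {
        StringBuilder body = new StringBuilder();
        for (Map<String, Object> row : rows) {
            // Action line: index each row, using its "id" column as the _id.
            body.append("{\"index\":{\"_index\":\"").append(index)
                .append("\",\"_id\":\"").append(row.get("id")).append("\"}}\n");
            // Source line: a naive JSON rendering of the remaining columns.
            StringJoiner doc = new StringJoiner(",", "{", "}");
            for (Map.Entry<String, Object> col : row.entrySet()) {
                if (!"id".equals(col.getKey())) {
                    doc.add("\"" + col.getKey() + "\":\"" + col.getValue() + "\"");
                }
            }
            body.append(doc).append("\n");
        }
        return body.toString();
    }

    public static void main(String[] args) {
        // In the real feeder these rows come from executing the configured SQL.
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1);
        row.put("name", "alice");
        System.out.print(buildBulkBody(List.of(row), "products"));
    }
}
```

The point of the sketch is only the direction of the data: the feeder JVM builds the request and sends it to the cluster, rather than the cluster fetching anything.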
The JDBC plugin does not detect on its own whether there is new data in the DB; it only sees whatever rows the configured SQL statement returns each time it runs.
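Because of that, the usual approach is to re-run the feeder on a schedule and make the SQL statement itself incremental, e.g. by filtering on an update-timestamp column. A sketch of such a feeder definition follows; the field names are from memory of the JDBC plugin's README and should be verified against its documentation, and `updated_at` is a hypothetical column:

```
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://localhost:3306/mydb",
        "user" : "...",
        "password" : "...",
        "sql" : "select * from products where updated_at > date_sub(now(), interval 5 minute)",
        "schedule" : "0 0/5 * * * ?"
    }
}
```

With something like this, each scheduled run picks up only the rows changed since the previous window, and the feeder pushes them to the cluster as described above.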