Is the JDBC river going to expire? If yes, what is the alternative for connecting to the database?


(Ritesh) #1

Is the JDBC river going to expire? If yes, what is the alternative for connecting to the database?

I want to connect to a database like Oracle and fetch query results from a table. If the JDBC river is going to expire, what's the alternative?


(Jörg Prante) #2

For ES 1.5 and higher, the JDBC river will be followed by a standalone JVM solution, a JDBC feeder, which uses the transport client and requires only small changes to the JSON river specification syntax.

Existing versions of the JDBC river will be supported until ES 1.3 and 1.4 reach their end of updates.

I will publish a document on how to move to the standalone JDBC feeder, so the transition will be smooth.

If you want a glimpse of the upcoming source code, look at the "noriver" branch at https://github.com/jprante/elasticsearch-jdbc/tree/noriver (the README is still a work in progress)
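For reference, a JDBC river today is defined by a JSON specification, and per the above the feeder is expected to accept a closely related syntax. A minimal sketch (connection details, SQL, and index names are placeholders, and exact parameter names vary by plugin version):

```json
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "dbuser",
    "password" : "dbpass",
    "sql" : "select * from orders",
    "index" : "myindex",
    "type" : "mytype"
  }
}
```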


(Mark Walkom) #3

We are also working on a JDBC input plugin for Logstash - https://github.com/logstash-plugins/logstash-input-jdbc
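For anyone wanting to try it early, a Logstash pipeline using this plugin would look roughly like the following sketch (option names are taken from the plugin repository and may change before release; the driver path, credentials, and SQL are placeholders):

```conf
input {
  jdbc {
    # Path to the MySQL JDBC driver jar and its driver class
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "dbuser"
    jdbc_password => "dbpass"
    # Query whose result rows become Elasticsearch documents
    statement => "SELECT * FROM orders"
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
```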


(Ritesh) #4

Hi jprante,

Thanks for your response. I am using ES 1.5 and the latest JDBC river installed from xbib.

If I have understood correctly, the feeder architecture will be supported, and not the river architecture?


(Jörg Prante) #5

The announcement of the river deprecation was "Rivers are deprecated from 1.5 moving forward" (https://www.elastic.co/blog/deprecating_rivers), so I keep the JDBC river in sync with that.

This means that if you update an old project to ES 1.5, you will be able to continue using the JDBC river. If you start a new project, I do not recommend setting up any solution based on rivers.


(Jim McKibben) #6

One, I have rather loved the jdbc-river, figured out how to work with it via cURL, and was even developing a PHP cURL implementation for maintaining the rivers. Thank you, Jörg, for your work!

Two, almost all of our (the company I work for) data is in a group of databases, and as near as I've been able to figure, running specific queries for that data will allow us to feed/seed Elasticsearch in a very clean and proper way (and, further, to use old data to look for patterns).

How would you suggest someone work with an external database (MySQL for example) and get specific data outputs into Elasticsearch?
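From what I've gathered from the jdbc-river README, you can already shape the output with plain SQL: column aliases control the target fields, dotted aliases build nested objects, and an `_id` alias sets the document id. A rough sketch of what I mean (table and column names made up):

```sql
SELECT
  orders.id      AS _id,             -- becomes the Elasticsearch document id
  orders.total   AS "order.total",   -- dotted alias -> nested "order" object
  customers.name AS "customer.name"
FROM orders
JOIN customers ON customers.id = orders.customer_id;
```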

I just got the latest versions of Logstash/Elasticsearch/Kibana installed and serving up pages, and was about to configure the river.


(system) #7