Oracle JDBC River Configuration Issue

Hi, I've been playing about with ES 1.4.4 and Logstash. I've managed to set up a Logstash filter to read a CSV file containing the results of a query from Oracle. Now that I've got what I want, I would like to automate this process, run further queries against Oracle, and pull the data into ES.

I have followed some instructions on setting up a RIVER. I have the following jar file in my plugins/jdbc folder: 'elasticsearch-river-jdbc-1.4.0.10.jar'. I restarted ES and can see the driver loaded:

loaded [jdbc-1.4.0.10-87c9ce0]

When trying to CURL the following, I get an error and was wondering what it means:

curl -XPUT 'localhost:9200/_river/my_oracle_river/_meta' -d '
{
"type" : "jdbc", 
"jdbc" : {
"url" : jdbc:oracle:owner@//instance:5123/test",
"user": "OWNER",
"password" : "",	
"sql" : "select * from users"
}
}'

I receive the following error:

{"error":"MapperParsingException[failed to parse]; nested: JsonParseException[Unrecognized token 'jdbc': was expecting ('true', 'false' or 'null')\n at [Source: [B@30fe5cf7; line: 1, column: 43]]; ","status":400}

I hear that RIVER support is ending... So what's the alternative to RIVERS for connecting to an ORACLE instance? Any advice/help appreciated.

Thanks.
Shaun.

This is not a river issue. It seems there is a syntax problem in the curl command your command line interface is executing.

Hi,

You are missing a " before "jdbc:oracle..." in the "url" value.
I used to include "driver": "oracle.jdbc.OracleDriver" as a parameter as well, but I'm not sure if it's required.
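For reference, a corrected version of the command might look like this. The missing quote before the URL is restored, the optional "driver" parameter is included, and the URL is rewritten in the standard Oracle thin-driver form (jdbc:oracle:thin:@//host:port/service) — the host "instance", port 5123, and service "test" are just the placeholders from the original post:

```shell
curl -XPUT 'localhost:9200/_river/my_oracle_river/_meta' -d '
{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "oracle.jdbc.OracleDriver",
    "url" : "jdbc:oracle:thin:@//instance:5123/test",
    "user" : "OWNER",
    "password" : "",
    "sql" : "select * from users"
  }
}'
```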

I built my own 'river', which is basically a program that reads data from a database and bulk-indexes that data into Elasticsearch. Good practice for getting to know Elasticsearch a bit more.
I used this resource, amongst some others: http://www.programcreek.com/java-api-examples/index.php?api=org.elasticsearch.action.bulk.BulkRequestBuilder
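To give a rough idea of that approach (this is not Maarten's actual code, and it avoids the Elasticsearch client library entirely): the core job of such a program is to turn database rows into the newline-delimited JSON body that the _bulk endpoint accepts. Here the rows are hard-coded stand-ins for a JDBC ResultSet, and the index/type names "users"/"user" are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BulkBodyBuilder {

    // Build the _bulk body: one action line plus one document line per row.
    static String toBulkBody(String index, String type, List<Map<String, Object>> rows) {
        StringBuilder sb = new StringBuilder();
        for (Map<String, Object> row : rows) {
            sb.append("{\"index\":{\"_index\":\"").append(index)
              .append("\",\"_type\":\"").append(type).append("\"}}\n");
            sb.append(toJson(row)).append("\n");
        }
        return sb.toString();
    }

    // Minimal JSON serialisation: numbers as-is, everything else quoted.
    static String toJson(Map<String, Object> row) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append("\"").append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) sb.append(v);
            else sb.append("\"").append(v).append("\"");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        // Stand-in for a row fetched over JDBC with "select * from users".
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1);
        row.put("name", "shaun");
        System.out.print(toBulkBody("users", "user", List.of(row)));
    }
}
```

A real version would iterate a JDBC ResultSet to build the rows and POST the resulting body to localhost:9200/_bulk (or use BulkRequestBuilder from the Java client, as in the link above).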

Cheers,
Maarten