JDBC River - outside valid range for the datatype INTEGER


(Justin Doles) #1

I'm struggling to figure out what I'm missing here. I'm new to both ES and
the JDBC river plugin. I've been able to create basic rivers from MySQL
with no issue. But when I get to a relatively large table (~74 million
rows), I get hung up. Below is the error I get:

java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.MySQLDataException:
'3.221880888E9' in column '3' is outside valid range for the datatype
INTEGER.

That column in MySQL is INT and I've mapped it to long in ES due to its
size. Even without defining the mapping myself, it errors out. The actual
value it's getting hung up on is 3221880888. My instinct tells me this is
a bitness problem (32 vs 64), but I don't know where else to look. Here's
my environment:

Windows 2008 R2 Standard (x64)
MySQL 5.5.33 (x64)
ES 0.90.5
JDBC River
MySQL Connector J 5.1.27

Any help would be greatly appreciated.
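(Editor's note: the failing value sits between the signed 32-bit maximum and the unsigned 32-bit maximum, which is why the driver rejects it when asked for a Java `int`. A quick sketch of the arithmetic, plain Java with no JDBC involved:)

```java
public class IntRangeDemo {
    public static void main(String[] args) {
        long failing = 3221880888L;             // the value from the error message
        long signedMax = Integer.MAX_VALUE;     // 2147483647, Java's signed int max
        long unsignedMax = 4294967295L;         // MySQL INT UNSIGNED max (2^32 - 1)

        // The value fits MySQL's unsigned INT but not Java's signed int:
        System.out.println(failing > signedMax);    // true
        System.out.println(failing <= unsignedMax); // true
    }
}
```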

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Jörg Prante) #2

In the JDBC river, the integer mapping is broken (SQL allows unsigned
integers, while Java before JDK 8 has only a signed integer type).

I have added an issue:
https://github.com/jprante/elasticsearch-river-jdbc/issues/136

Thanks for reporting,

Jörg
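(Editor's note: to make the mismatch concrete, a value like 3221880888 wraps to a negative number when forced into Java's signed `int`, and pre-JDK-8 code has to mask it back by hand; JDK 8 added `Integer.toUnsignedLong` for exactly this. A minimal illustration, unrelated to the river's actual internals:)

```java
public class UnsignedWrapDemo {
    public static void main(String[] args) {
        // Force the unsigned value into a signed 32-bit int: it wraps negative.
        int raw = (int) 3221880888L;
        System.out.println(raw);                        // -1073086408

        // Pre-JDK-8 idiom: mask with 0xFFFFFFFFL to recover the unsigned value.
        long recovered = raw & 0xFFFFFFFFL;
        System.out.println(recovered);                  // 3221880888

        // JDK 8+ equivalent of the same masking:
        System.out.println(Integer.toUnsignedLong(raw)); // 3221880888
    }
}
```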



(Justin Doles) #3

Well, glad it's not just me. I should add that I specified the unsigned INT
type in MySQL; the default signed type works. As a workaround, I imagine I
could use a signed BIGINT? I'm not sure that's ideal, since it takes twice
the storage in MySQL. Do you know when this might get addressed?

Thanks for the response and the plugin.

-Justin




(Jörg Prante) #4

BIGINT in JDBC is a signed 64-bit integer and will be converted to a JSON
"long".

Note, you can compress table data in MySQL if storage is your concern.

In a day or two I can commit the change and create a fix version so that
unsigned integers are checked against their boundaries and do not break.

Jörg
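(Editor's note: the boundary check Jörg describes might look roughly like the sketch below. `toJsonLong` and `UNSIGNED_INT_MAX` are hypothetical names for illustration, not the plugin's actual code: values inside MySQL's INT UNSIGNED range widen safely to a Java `long` (an ES "long" field), and anything outside is rejected instead of silently wrapping.)

```java
public class UnsignedIntCheck {
    // Maximum value of a MySQL INT UNSIGNED column (2^32 - 1).
    static final long UNSIGNED_INT_MAX = 4294967295L;

    // Hypothetical boundary check: widen an in-range unsigned INT value
    // to a Java long, reject anything outside the column's range.
    static long toJsonLong(long columnValue) {
        if (columnValue < 0 || columnValue > UNSIGNED_INT_MAX) {
            throw new IllegalArgumentException(
                "outside INT UNSIGNED range: " + columnValue);
        }
        return columnValue;
    }

    public static void main(String[] args) {
        // The value from the original error passes the check unchanged.
        System.out.println(toJsonLong(3221880888L)); // 3221880888
    }
}
```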




(Justin Doles) #5

Excellent! I'll keep an eye out.

I do use row compression (InnoDB) which yields ~60% savings on most tables.
I just have a massive amount of data that goes into these tables. My
smallest, compressed set is 80GB. :wink:

Justin




(Justin Doles) #6

Works great with your latest commit + ES 0.90.7. Thanks again.


