I have an Oracle table where most of the number fields are BigDecimal. I
want to migrate all the data from the table to Elasticsearch. If I don't
provide any mapping, Elasticsearch treats the BigDecimal values as
double and precision is lost. I was wondering if Elasticsearch provides
support for a BigDecimal field type. My understanding is that it doesn't
(http://www.elasticsearch.org/guide/reference/mapping/core-types/). Right
now, as a workaround, I have used an explicit mapping where I map all
the BigDecimal fields to string. My question is how to index these
BigDecimal fields efficiently, because I believe that using a string for
a BigDecimal has performance drawbacks.
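
For illustration, the string-mapping workaround could look something like
this (a sketch; the index, type, and field names are invented, and
"index" : "not_analyzed" keeps the value from being tokenized):

    curl -XPUT 'localhost:9200/myindex/mytype/_mapping' -d '
    {
      "mytype" : {
        "properties" : {
          "price" : { "type" : "string", "index" : "not_analyzed" }
        }
      }
    }'
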
What kind of performance issues do you mean? If you want fast
floating-point operations, do not use BigDecimal.
BigDecimal has to be converted to double or formatted to a string anyway,
because JSON has no BigDecimal type. You can parameterize the
conversion by scale and rounding mode (the defaults are the scale of the
BigDecimal value and rounding mode HALF_UP).
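
In plain Java, the conversion choices look roughly like this (a minimal
sketch with a made-up value):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class BigDecimalConversion {
        public static void main(String[] args) {
            BigDecimal value = new BigDecimal("1234567890.123456789");

            // Lossless: format to a plain string before indexing
            String asString = value.toPlainString();   // "1234567890.123456789"

            // Lossy: double has only about 15-17 significant decimal digits
            double asDouble = value.doubleValue();     // ~1.2345678901234567E9

            // Parameterized: explicit scale and rounding mode
            BigDecimal scaled = value.setScale(4, RoundingMode.HALF_UP);  // 1234567890.1235

            System.out.println(asString);
            System.out.println(asDouble);
            System.out.println(scaled.toPlainString());
        }
    }
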
Jörg
On 15.07.13 08:02, sameer mohammed wrote:
I was wondering if Elasticsearch provides support for a BigDecimal
field type.