In our current project we need to store fields in an index with numbers that are bigger than a long value. Is this somehow possible? Currently I get the following error:
Error encountered during bulk load [MapperParsingException[failed to parse [fieldName]]; nested: JsonParseException[Numeric value (18446744073709551612) out of range of long (-9223372036854775808 - 9223372036854775807)
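For context, here is a minimal sketch of a request that reproduces this (the index and type names are made up; `fieldName` and the value are taken from the error above):

```
# Fails with the MapperParsingException above when fieldName is mapped as long
curl -XPUT 'localhost:9200/big_numbers/docs/1' -d '{
  "fieldName": 18446744073709551612
}'
```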
double is the best you can do if you still want to use the numeric functions, but you get the corresponding loss of precision. You can also store it as a string if you like.
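To illustrate both options, a minimal mapping sketch (index, type, and field names are hypothetical; the `string` syntax assumes a 1.x/2.x-era Elasticsearch, which matches the error format above):

```
# Two options: double keeps numeric functions but loses precision;
# a not_analyzed string keeps the exact digits but drops numeric semantics
curl -XPUT 'localhost:9200/big_numbers' -d '{
  "mappings": {
    "docs": {
      "properties": {
        "value_as_double": { "type": "double" },
        "value_as_string": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'
```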
If you genuinely need more than that, I'd file an issue. I thought there was already an issue mentioning arbitrary-precision integers, but I can't find it now.
You can store these documents in two different types (long as string, and long). First insert the documents that don't cause exceptions, and after that the ones that do. For example, this is possible with the NEST .NET bulk insert: the bulk response will return the docs with exceptions.
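As a sketch of the idea using the plain bulk API (names are hypothetical; the same applies through the NEST wrapper):

```
# The second action fails because the raw number exceeds the long range;
# the bulk response sets "errors": true and reports a per-item "error"
curl -XPOST 'localhost:9200/_bulk' -d '
{"index":{"_index":"big_numbers","_type":"docs","_id":"1"}}
{"value_as_string":"18446744073709551612"}
{"index":{"_index":"big_numbers","_type":"docs","_id":"2"}}
{"value_as_long":18446744073709551612}
'
```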
What do you mean by this? That I need to insert a document twice? Or are you saying that inserting such a big number as a normal long (not long as string) always results in an exception?
Are you talking about a document inserted in schema-less mode, or one inserted with a strict schema?
In our system we have a strict schema, with data type long for this field.
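For reference, a minimal sketch of what such a strict mapping might look like (index and type names are hypothetical; `fieldName` is from the error above):

```
# Strict schema: unknown fields are rejected, and fieldName is typed long
curl -XPUT 'localhost:9200/big_numbers' -d '{
  "mappings": {
    "docs": {
      "dynamic": "strict",
      "properties": {
        "fieldName": { "type": "long" }
      }
    }
  }
}'
```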