I am using the Java REST client for Elasticsearch. I have a requirement where I need to find the max value of a long field across all the documents in an index.
I tried using BigDecimal, BigInteger, and long. I am losing precision in every case.
Also, after I create a new document with the next number and then run the max id query, I get the old max value again.
i.e. I run the max query from my first post and get 1805130000005357060. Then I create a new document with MYid as 1805130000005357061. When I run the max query again, I still get 1805130000005357060. This is perplexing; I am not sure if it has anything to do with precision loss. It also happens when I test with Postman, so I guess it is not an issue with the REST client. But the question remains: how do I find the max value of the long field across all my documents?
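For reference, a minimal sketch of how such a max aggregation is typically issued with the Java high-level REST client (the index name `my_index` is an assumption, `MYid` is the field from above; imports match the 6.x client, where in 7.x `Max` moved to `org.elasticsearch.search.aggregations.metrics`):

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.max.Max;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// client is an already-configured RestHighLevelClient
SearchSourceBuilder source = new SearchSourceBuilder()
        .size(0) // only the aggregation result is needed, not the hits
        .aggregation(AggregationBuilders.max("max_id").field("MYid"));

SearchRequest request = new SearchRequest("my_index").source(source);
SearchResponse response = client.search(request, RequestOptions.DEFAULT);

// Max.getValue() returns a double, which is where the precision is lost
Max max = response.getAggregations().get("max_id");
long maxId = (long) max.getValue();
```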
@Paddy_Mahadeva sorry for the long delay; your original analysis that this is not only an issue with the REST client is right. The problem is the max aggregation: all aggregations internally use a double representation, because that makes sense for things like sums, but for min/max on very large values it can come at the cost of precision. Here is an issue that discusses this problem.
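To make the precision loss concrete, this small standalone snippet (not from the thread) shows that the two ids from the question collapse to the same double, which is exactly why the aggregation keeps returning the old max:

```java
public class MaxPrecisionDemo {
    public static void main(String[] args) {
        long oldMax = 1805130000005357060L;
        long newMax = 1805130000005357061L;

        // Between 2^60 and 2^61 adjacent doubles are 2^8 = 256 apart,
        // so both longs round to the same double value.
        System.out.println((double) oldMax == (double) newMax); // true
        System.out.printf("%.0f%n", (double) newMax);           // 1805130000005357056
    }
}
```

A common workaround, not mentioned in this reply but one that avoids the double conversion entirely, is to sort on the field in descending order with size 1 and read the value from the top hit, since sorting on a long field compares the raw long values.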