Incorrect sum when aggregating in Elasticsearch for one particular index

Hi @colings86,

I am currently trying to sum a field in Elasticsearch with an aggregation, but on checking the result, I don't think it's giving me the correct answer.

Following is my elasticsearch query:

{
    "size": 0,
    "query": {
        "filtered": {
            "query": {
                "query_string": {
                    "query": "*"
                }
            },
            "filter": {
                "and": [
                    {
                        "range": {
                            "start_timestamp": {
                                "from": start_date,
                                "to": end_date
                            }
                        }
                    },
                    {
                        "term": { "id": ad_id }
                    }
                ]
            }
        }
    },
    "aggs": {
        "type1": {
            "terms": {
                "field": "type_a",
                "size": 0,
                "order": {
                    "revenue": "desc"
                }
            },
            "aggs": {
                "revenue": {
                    "sum": {
                        "field": "revenue"
                    }
                }
            }
        }
    }
}

When I run the same query against my other indexes it gives me the correct sum, but for this one index the result is exceedingly high and incorrect. I checked by downloading all the matching documents and summing the field in Python, and that gives me the correct number, which leads me to believe it might be related to my query. I also checked the mapping for the field "revenue" and it's "double".
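
For reference, my Python cross-check looked roughly like this (the connection details, index name, and filter values below are placeholders, not my real setup):

    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import scan

    es = Elasticsearch(["localhost:9200"])

    # Placeholder values standing in for the real filter inputs.
    start_date, end_date, ad_id = "2015-08-01", "2015-08-31", 123

    query = {
        "query": {
            "filtered": {
                "filter": {
                    "and": [
                        {"range": {"start_timestamp": {"from": start_date, "to": end_date}}},
                        {"term": {"id": ad_id}}
                    ]
                }
            }
        }
    }

    # Pull every matching document with the scroll API and sum the raw
    # "revenue" values client-side instead of using the aggregation.
    total = sum(hit["_source"]["revenue"] for hit in scan(es, query=query, index="my_index"))
    print(total)  # this matches the expected sum, unlike the aggregation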

Is it some kind of overflow problem?

Thanks,
Aditya

Can you give an example of a correct response (from one of the other indices) and an example of the erroneous response too? Also, can you show the mappings for the index which is producing the errors?

I suspect you are hitting the same issue as described in this post: Sum Aggregation returning very small, unrelated values

If the responses and mapping are long then please paste them into a gist and link it here rather than filling this topic with large amounts of JSON.

Hi @colings86

Thanks for the quick reply:

For the particular field the mapping is:

      "revenue" : {
        "type" : "double"
      },

The incorrect response is something like this: 4676552119224959000

whereas a correct response is usually: 250000

If you want, I can paste the complete mapping into a gist and link it here.

Could this be caused by the field sometimes being mapped as "long" and sometimes as "double"? Shouldn't that then happen with all my indexes, since I am using the same upload script to update all of them?

Will I have to reindex my data?

Thanks again.

It looks like you are running into the issue I linked above. This happens only rarely (which is why you only see it on one of your indices), when two shards dynamically map the same field as different types at the same time (one shard may see a double value and map the field to a double whilst the other sees a long value and maps the field to a long). This is a known bug in 1.x and will be fixed in the upcoming 2.0 release (the beta for this release is available now but DO NOT use this in production). To work around this bug you will need to re-index your data into an index with explicit mappings for your fields (especially your numeric fields).
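
For example, the replacement index could be created with the numeric field mapped explicitly, along these lines (a sketch using the Python client; the index and type names are illustrative, so adjust them to your actual schema):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Create the new index with an explicit mapping so dynamic mapping
    # can never map "revenue" as long on one shard and double on another.
    es.indices.create(
        index="my_index_v2",
        body={
            "mappings": {
                "my_type": {
                    "properties": {
                        "revenue": {"type": "double"}
                    }
                }
            }
        }
    )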

Hi @colings86,

Thanks for the explanation.

I just checked the mapping on my other node, and it contains the same mapping too. Do you still think it's due to incorrect mapping types?

For reindexing, can I just query for the whole data -> explicitly specify the mapping -> insert the data into a new index -> delete the old index? I will be doing this whole process in Python.

Is there a better way to go about it?

Thanks,
Aditya

Yes. The mapping will appear correct but one or more shards will actually be using an incorrect mapping for that field.

Yes, I believe the Python client has a reindex helper, so I would use that.
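
Something like this sketch should do it (the index names are placeholders, and you should verify the counts and sums match before deleting anything):

    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import reindex

    es = Elasticsearch(["localhost:9200"])

    # Copy all documents from the old index into the new, explicitly
    # mapped index created beforehand.
    reindex(es, source_index="my_index", target_index="my_index_v2")

    # Only remove the old index once the new one has been verified.
    es.indices.delete(index="my_index")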