Painless Elasticsearch update does not handle big numbers

Hello,

I'm using an index that tracks byte counts per IP (big numbers). Each time an IP appears in the log, I extract the size of the request and add it to a document keyed by that IP. The index mapping declares this "size" field as a long, so it should hold values up to 2^63 - 1. The problem is that I do the updates with a stored Painless script and it doesn't work: the script appears to do 32-bit integer arithmetic.

How to reproduce:

Store the update script, create the template so the field is mapped as a long, and delete any existing index so it is recreated from the template on the next write.

POST /_scripts/eci_test1_script
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.bignumber += params['bignumberincrement']"
  }
}
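
You can optionally check that the script was stored by fetching it back by id:

GET /_scripts/eci_test1_script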

PUT /_index_template/eci_test1_template
{
  "index_patterns": [
    "eci_test1"
  ],
  "template": {
    "mappings": {
      "properties": {
        "bignumber": {
          "type": "long"
        }
      }
    }
  }
}

DELETE /eci_test1

Now post the upsert three times, then check the value.

POST eci_test1/_update/mydoc1
{
  "script" : {
    "id" : "eci_test1_script",
    "params" : {
      "bignumberincrement" : 1000000000
    }
  },
  "upsert" : {
    "bignumber" : 1000000000
  }
}

GET /eci_test1/_doc/mydoc1

What I get:

{
  "_index" : "eci_test1",
  "_type" : "_doc",
  "_id" : "mydoc1",
  "_version" : 3,
  "_seq_no" : 2,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "bignumber" : -1294967296
  }
}

bignumber should be 3000000000, but the value has wrapped around a 32-bit integer: 3000000000 - 2^32 = -1294967296.

I found a workaround: do not use the compound increment operator.

Instead of

// BUGGY with long fields
ctx._source.bignumber += (long) params['bignumberincrement']

Use

// WORKS
ctx._source.bignumber = (long) ctx._source.bignumber + (long) params['bignumberincrement']
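
For completeness, overwriting the stored script with the assignment form (same id and param name as above) is a minimal sketch of the fix; the _update calls above then work unchanged:

POST /_scripts/eci_test1_script
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.bignumber = (long) ctx._source.bignumber + (long) params['bignumberincrement']"
  }
}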

Hi @DidierB. What's happening is that the _source is being parsed from JSON without knowledge of the mappings, so the number defaults to int.

You're right, the workaround is to force ctx._source.bignumber to be a long on every update by assigning it explicitly.
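
If you want to see the same wrap-around in isolation, the Painless execute API (if your version has it) shows what happens when numbers parsed from JSON are added as ints. A minimal sketch, with made-up params a and b:

POST /_scripts/painless/_execute
{
  "script": {
    "source": "params.a + params.b",
    "params": {
      "a": 2000000000,
      "b": 2000000000
    }
  }
}

Both params arrive as 32-bit ints, so the sum wraps around instead of being 4000000000; casting one side to long, e.g. (long) params.a + params.b, gives the correct result.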
