BIG_INTEGER issues in nested documents during reindex

Working in version 2.4.3.

I have to reindex our data, and am running into an issue with integer parsing (specifically, strings being parsed as integers).

The error:

```json
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_state_exception",
        "reason": "No matching token for number_type [BIG_INTEGER]"
      }
    ],
    "type": "illegal_state_exception",
    "reason": "No matching token for number_type [BIG_INTEGER]"
  },
  "status": 500
}
```

My script so far:

```groovy
if (ctx._source.field_one) {
  for (s in ctx._source.field_one.values) {
    if (s.ref_data_1) {
      s.ref_data_1.value = s.ref_data_1.value.toString();
    }
    if (s.ref_data_4) {
      s.ref_data_4.value = s.ref_data_4.value.toString();
    }
  }
}
```

The offending data seems to be a string holding a numerical value 19 characters long, so the error makes sense based on previous posts (No matching token for number_type [BIG_INTEGER] in particular). Is there a way, via a script while reindexing, to correctly move these values over from the previous index to the new one? Is there any way to avoid the step that tries to parse the value as an int?
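For context on why 19 digits is the tipping point: a signed 64-bit long tops out at 9,223,372,036,854,775,807 (itself 19 digits), so anything beyond that gets tokenized as BIG_INTEGER. A minimal Python sketch of the pre-conversion idea — walk the document and stringify any integer too large for a long before it ever reaches the indexer (the field structure here is just an illustration, not part of any Elasticsearch API):

```python
# Max value of a signed 64-bit long; integers beyond this would
# be tokenized as BIG_INTEGER by a long-based parser.
LONG_MAX = 2**63 - 1

def stringify_big_numbers(doc):
    """Recursively convert any int outside signed-64-bit range to a string."""
    if isinstance(doc, dict):
        return {k: stringify_big_numbers(v) for k, v in doc.items()}
    if isinstance(doc, list):
        return [stringify_big_numbers(v) for v in doc]
    if isinstance(doc, int) and abs(doc) > LONG_MAX:
        return str(doc)
    return doc

# Hypothetical document shaped like the nested fields in the script above.
doc = {"field_one": {"values": [{"ref_data_1": {"value": 9999999999999999999},
                                 "ref_data_4": {"value": 42}}]}}
converted = stringify_big_numbers(doc)
# the 19-digit value is now a string; normal-sized ints are untouched
```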

To muddy the waters, I'm trying to do this for a list of nested documents, if those nested documents exist. I'm not sure I'm scripting it the correct way to begin with. Help on either front would be great!
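For reference, this is roughly the shape of the request body I'm POSTing to `_reindex` (index names are placeholders, and the `inline` script is the Groovy from above collapsed to one line; Groovy is the default script language in 2.4):

```json
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" },
  "script": {
    "inline": "if (ctx._source.field_one) { for (s in ctx._source.field_one.values) { if (s.ref_data_1) { s.ref_data_1.value = s.ref_data_1.value.toString() } if (s.ref_data_4) { s.ref_data_4.value = s.ref_data_4.value.toString() } } }"
  }
}
```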

We have found two solutions to this issue. One is simply to remove the script associated with the reindex; doing so allows the whole reindex to happen without issue.

We do have other things that we need to do in the script, though, and have found a workaround in elasticsearch-reindex, the older Node library that lets you reindex against a remote server and run dynamic Node code against the records as they flow through it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.