Bulk adding a new field

Hi,

I am trying to add a new field to existing documents via the bulk API. I am currently struggling with the scenario below.

Mapping:
"fourthField": {
"type": "string",
"index": "not_analyzed"
}
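
(For reference, the field was added with a mapping update along these lines; the index and type names are illustrative:)

PUT /myindex/_mapping/mytype
{
  "properties": {
    "fourthField": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}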

Script:
{ "script" : "ctx._source.fourthField ="1234567890""}

A memory/GC warning is logged when I try to add "fourthField" with a long value like the one above to the existing documents via bulk (5000 records at a time), and a read timed out exception is also thrown on the client.
It works fine when I add a short numeric value like "123" to "fourthField". Is there a limitation for fields whose index attribute is set to 'not_analyzed'?

Environment:
Elasticsearch 2.3.1
Heap size: 8g (increased from default value)

Thanks.

There is no such limitation. In my opinion, the GC messages you are seeing are mostly due to the fact that you are running an extremely expensive operation: since Lucene does not support in-place updates, adding a field to every document essentially rewrites the whole index.
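
If the timeouts persist, it may help to send smaller batches and pause between them so merging and GC can keep up. A rough sketch, assuming the payload has been split into files of, say, 500 actions each (the file names, batch size, and endpoint are placeholders):

# send each pre-split bulk file one at a time instead of 5000 actions at once
for f in batch-*.json; do
  curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary "@$f"
  sleep 1   # give the cluster a moment to flush/merge between batches
done

You could also watch heap usage and merge activity between batches (e.g. via _nodes/stats) and tune the batch size accordingly.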