Error during Re-Indexing, chunk size?

I am running Elasticsearch v2.1 and using the Python re-indexing helper (`elasticsearch.helpers.reindex`). The source index has 500,491 documents and a size of 150.4 GB, so the average document is about 300 KB.

So I chunked the bulk insert to 50 documents, set `refresh_interval` to -1 on the target index, and let it go.
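For reference, the setup looks roughly like this. It is only a sketch: the index names are placeholders, and it assumes the elasticsearch-py 2.x helper signature.

```python
from elasticsearch import Elasticsearch, helpers

client = Elasticsearch(["localhost:9200"])

# Disable refresh on the target index for the duration of the bulk load.
client.indices.put_settings(
    index="cps_docs_v2",
    body={"index": {"refresh_interval": "-1"}},
)

# Copy the source index over in chunks of 50 documents.
helpers.reindex(client, "cps_docs", "cps_docs_v2", chunk_size=50)

# Restore refresh once the copy is done.
client.indices.put_settings(
    index="cps_docs_v2",
    body={"index": {"refresh_interval": "1s"}},
)
```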

After 490212 documents I received this error:

Traceback (most recent call last):
  File "reindex_cps_docs.py", line 33, in <module>
    chunk_size=50
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 353, in reindex
    chunk_size=chunk_size, **kwargs)
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 188, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 159, in streaming_bulk
    for bulk_actions in _chunk_actions(actions, chunk_size, max_chunk_bytes, client.transport.serializer):
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 53, in _chunk_actions
    for action, data in actions:
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 342, in _change_doc_index
    for h in hits:
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 283, in scan
    resp = client.scroll(scroll_id, scroll=scroll)
  File "c:\Python27\lib\site-packages\elasticsearch\client\utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "c:\Python27\lib\site-packages\elasticsearch\client\__init__.py", line 662, in scroll
    params=params, body=body)
  File "c:\Python27\lib\site-packages\elasticsearch\transport.py", line 307, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "c:\Python27\lib\site-packages\elasticsearch\connection\http_urllib3.py", line 93, in perform_request
    self._raise_error(response.status, raw_data)
  File "c:\Python27\lib\site-packages\elasticsearch\connection\base.py", line 105, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.TransportError: TransportError(503, u'{"_scroll_id":"c2NhbjswOzE7dG90YWxfaGl0czoyOTMyMjk7","took":1,"timed_out":false,"_shards":{"total":5,"successful":0,"failed":0},"hits":{"total":293229,"max_score":0.0,"hits":[]}}')

On the client I set:
timeout=30,
max_retries=10,
retry_on_timeout=True
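In code, the client is constructed roughly like this (the host is a placeholder; these are the standard elasticsearch-py client options):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch(
    ["localhost:9200"],
    timeout=30,            # per-request timeout in seconds
    max_retries=10,        # retry a failed request up to 10 times
    retry_on_timeout=True, # also retry when a request times out
)
```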

I can lower the chunk size. What else can I try?

I raised the scan's scroll timeout to 10m, and after about 3 hours the retry gave the following error:

Traceback (most recent call last):
  File "reindex_cps_docs.py", line 34, in <module>
    scroll='10m'
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 353, in reindex
    chunk_size=chunk_size, **kwargs)
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 188, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 160, in streaming_bulk
    for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
  File "c:\Python27\lib\site-packages\elasticsearch\helpers\__init__.py", line 89, in _process_bulk_chunk
    raise e
elasticsearch.exceptions.TransportError: TransportError(503, u'cluster_block_exception')