Need help with reindexing records in Elasticsearch 1.7.1

Hello, I am using Elasticsearch 1.7 and have a 3-node cluster. All three nodes are data nodes and master-eligible, with minimum_master_nodes set to 2. I have an index index_v1 which holds around 500k records, and I want to reindex around 200k of those records into a new index index_v2. For that I am using the following plugin: https://github.com/codelibs/elasticsearch-reindexing
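For reference, the reindex call I am issuing looks roughly like this (URL pattern as I understand it from the plugin's README, so treat it as an approximation; the host is one of the three nodes, and the term query is only a placeholder for the filter I actually use to select the ~200k records):

curl -XPOST 'http://Node1IP:9200/index_v1/_reindex/index_v2' -d '
{
  "query": {
    "term": { "some_field": "some_value" }
  }
}'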
However, whenever I try to use the plugin it fails, and I get the following errors in the logs:

[2016-08-02 05:25:13,931][WARN ][action.bulk ] [NodeName-Node1] unexpected error during the primary phase for action [indices:data/write/bulk[s]]
org.codelibs.elasticsearch.reindex.exception.ReindexingException: failure in bulk execution:
[0]: index [index_v2], type [metadata], id [18229489], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[1]: index [index_v2], type [metadata], id [18229500], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[2]: index [index_v2], type [metadata], id [18229533], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[3]: index [index_v2], type [metadata], id [18229496], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[4]: index [index_v2], type [metadata], id [18229483], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[5]: index [index_v2], type [metadata], id [18229519], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[6]: index [index_v2], type [metadata], id [18229490], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[5032]: index [index_v2], type [metadata], id [18225288], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
[5033]: index [index_v2], type [metadata], id [18226292], message [SendRequestTransportException[[NodeName-Node2][inet[/Node2IP:9300]][indices:data/write/bulk[s]]]; nested: NodeNotConnectedException[[NodeName-Node2][inet[/Node2IP:9300]] Node not connected]; ]
at org.codelibs.elasticsearch.reindex.service.ReindexingService$ReindexingListener$2.onResponse(ReindexingService.java:215)
at org.codelibs.elasticsearch.reindex.service.ReindexingService$ReindexingListener$2.onResponse(ReindexingService.java:210)
at org.elasticsearch.action.bulk.TransportBulkAction$2.finishHim(TransportBulkAction.java:360)
at org.elasticsearch.action.bulk.TransportBulkAction$2.onFailure(TransportBulkAction.java:355)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAsFailed(TransportShardReplicationOperationAction.java:536)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.retry(TransportShardReplicationOperationAction.java:495)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$2.handleException(TransportShardReplicationOperationAction.java:479)
at org.elasticsearch.transport.TransportService$3.run(TransportService.java:290)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I kept monitoring with the head plugin, and the cluster health stayed green during the entire process.
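For completeness, the same cluster state the head plugin shows can also be checked with the standard 1.7 endpoints (Node1IP again stands in for the HTTP address of any of the three nodes):

curl -XGET 'http://Node1IP:9200/_cluster/health?pretty'
curl -XGET 'http://Node1IP:9200/_cat/nodes?v'

The health endpoint is what reports green above; _cat/nodes lists the nodes currently joined to the cluster, which seems relevant given the NodeNotConnectedException for Node2 in the logs.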
Any ideas?