I may have actually answered my own question about the current issue, but any further input as a learning experience is very welcome.
I think the reason NODE3 is unavailable right now is that the JVM is pausing for garbage collection: looking at the logs, old-generation GC is running non-stop.
[2016-08-11 09:10:12,922][INFO ][monitor.jvm ] [NODE3] [gc][old][370309][68673] duration [6.3s], collections [1]/[6.5s], total [6.3s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [51.8mb]->[51.6mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:10:23,538][INFO ][monitor.jvm ] [NODE3] [gc][old][370311][68675] duration [6.3s], collections [1]/[6.4s], total [6.3s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [57.9mb]->[54.1mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:10:34,539][INFO ][monitor.jvm ] [NODE3] [gc][old][370313][68677] duration [6.3s], collections [1]/[6.6s], total [6.3s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [52.3mb]->[52.6mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:10:45,390][INFO ][monitor.jvm ] [NODE3] [gc][old][370315][68679] duration [6.4s], collections [1]/[6.5s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [51.3mb]->[52.8mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:10:56,637][INFO ][monitor.jvm ] [NODE3] [gc][old][370317][68681] duration [6.4s], collections [1]/[7s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [49.5mb]->[55.4mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:11:07,394][INFO ][monitor.jvm ] [NODE3] [gc][old][370319][68683] duration [6.4s], collections [1]/[6.6s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [53.7mb]->[61.8mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:11:18,259][INFO ][monitor.jvm ] [NODE3] [gc][old][370321][68685] duration [6.3s], collections [1]/[6.4s], total [6.3s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [60.2mb]->[52.2mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:11:28,982][INFO ][monitor.jvm ] [NODE3] [gc][old][370323][68687] duration [6.5s], collections [1]/[6.6s], total [6.5s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [52.4mb]->[53.9mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:11:39,874][INFO ][monitor.jvm ] [NODE3] [gc][old][370325][68689] duration [6.4s], collections [1]/[6.6s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [50.6mb]->[56.3mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:11:51,111][INFO ][monitor.jvm ] [NODE3] [gc][old][370327][68691] duration [6.6s], collections [1]/[6.6s], total [6.6s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [66.5mb]->[52.9mb]/[66.5mb]}{[survivor] [5.3mb]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:12:02,033][INFO ][monitor.jvm ] [NODE3] [gc][old][370329][68693] duration [6.4s], collections [1]/[6.5s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [53.4mb]->[56.7mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:12:13,029][INFO ][monitor.jvm ] [NODE3] [gc][old][370331][68695] duration [6.4s], collections [1]/[6.5s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [62.4mb]->[59.3mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:12:23,845][INFO ][monitor.jvm ] [NODE3] [gc][old][370333][68697] duration [6.5s], collections [1]/[6.5s], total [6.5s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [66.5mb]->[56.3mb]/[66.5mb]}{[survivor] [6.8mb]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
[2016-08-11 09:12:34,800][INFO ][monitor.jvm ] [NODE3] [gc][old][370335][68699] duration [6.4s], collections [1]/[6.6s], total [6.4s]/[1.7d], memory [1.9gb]->[1.9gb]/[1.9gb], all_pools {[young] [59.3mb]->[56.2mb]/[66.5mb]}{[survivor] [0b]->[0b]/[8.3mb]}{[old] [1.9gb]->[1.9gb]/[1.9gb]}
If I'm reading this right...
The old pool is completely full (memory [1.9gb]->[1.9gb]/[1.9gb]) and the collector isn't finding any dead objects to free, so the JVM just runs old-generation GC back to back in the hope the next pass will reclaim something. Each of those ~6.5s collections is a stop-the-world pause, which is what's making Elasticsearch unresponsive on this node.
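If that reading is right, the heap stats should show NODE3 pinned at or near 100% heap while the other nodes sit lower. Assuming the cluster is reachable on localhost:9200, something like this should confirm it:

curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'
curl -s 'http://localhost:9200/_nodes/NODE3/stats/jvm?pretty'

(A node stuck in full GC can be slow to answer even these, so pointing the request at one of the healthy nodes is probably a safer bet.)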
I'm guessing the fix is to give the JVM a bigger heap, or to add another node to the cluster to spread the load.
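For the memory route, and assuming this is an Elasticsearch 2.x install from the stock packages (the log format suggests it is), the heap is set with the ES_HEAP_SIZE variable. The logs show roughly a 2gb heap, so doubling it might look like:

# /etc/default/elasticsearch (deb) or /etc/sysconfig/elasticsearch (rpm)
ES_HEAP_SIZE=4g    # sets -Xms and -Xmx together

sudo service elasticsearch restart

The usual guidance applies: keep the heap at or below 50% of physical RAM so the OS page cache gets the rest, and below ~30.5gb so compressed object pointers stay enabled.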