OK, I have started again with 2 nodes running locally.
Two ES 5.0.0-rc1 instances in separate folders, with http.port: 9201 configured on the second node, along with host:9301 in the unicast discovery config.
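For reference, the second node's settings look roughly like this (a sketch from memory; the node name, cluster name, and host IP are just from my local setup):

```yaml
# node2 elasticsearch.yml (illustrative sketch of my local setup)
cluster.name: my-cluster
node.name: Danny-Desktop-ES-5.0-node2
http.port: 9201                 # second node's HTTP port, to avoid clashing with 9200
transport.tcp.port: 9301        # second node's transport port
discovery.zen.ping.unicast.hosts: ["10.0.10.90:9300", "10.0.10.90:9301"]
```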
This works fine: the two nodes find each other and form a cluster. So surely this rules out firewall and port issues?
The problem starts when I install x-pack on one node and restart it.
The log file reports that it detects the master and the other node, and that it has started... but immediately afterwards it reports master_left with reason "failed to ping, tried 3 times". Then I see the exporter warnings and exceptions again, and the node fails to rejoin the cluster.
[2016-10-20T08:36:29,219][INFO ][o.e.c.s.ClusterService ] [Danny-Desktop-ES-5.0-node2] detected_master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}, added {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: zen-disco-receive(from master [master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300} committed version [5]])
[2016-10-20T08:36:29,296][INFO ][o.e.h.HttpServer ] [Danny-Desktop-ES-5.0-node2] publish_address {10.0.10.90:9201}, bound_addresses {10.0.10.90:9201}
[2016-10-20T08:36:29,296][INFO ][o.e.n.Node ] [Danny-Desktop-ES-5.0-node2] started
[2016-10-20T08:36:30,299][INFO ][o.e.d.z.ZenDiscovery ] [Danny-Desktop-ES-5.0-node2] master_left [{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2016-10-20T08:36:30,300][WARN ][o.e.d.z.ZenDiscovery ] [Danny-Desktop-ES-5.0-node2] master left (reason = failed to ping, tried [3] times, each with maximum [30s] timeout), current nodes: {{Danny-Desktop-ES-5.0-node2}{mS8oyd-gQLWoOWkW77VYhQ}{WU_yxRiJT_SIj_Yl-wAFSg}{10.0.10.90}{10.0.10.90:9301},}
[2016-10-20T08:36:30,301][INFO ][o.e.c.s.ClusterService ] [Danny-Desktop-ES-5.0-node2] removed {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: master_failed ({Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300})
[2016-10-20T08:36:33,355][INFO ][o.e.c.s.ClusterService ] [Danny-Desktop-ES-5.0-node2] detected_master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}, added {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: zen-disco-receive(from master [master {Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300} committed version [7]])
[2016-10-20T08:36:34,379][INFO ][o.e.d.z.ZenDiscovery ] [Danny-Desktop-ES-5.0-node2] master_left [{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2016-10-20T08:36:34,380][WARN ][o.e.d.z.ZenDiscovery ] [Danny-Desktop-ES-5.0-node2] master left (reason = failed to ping, tried [3] times, each with maximum [30s] timeout), current nodes: {{Danny-Desktop-ES-5.0-node2}{mS8oyd-gQLWoOWkW77VYhQ}{WU_yxRiJT_SIj_Yl-wAFSg}{10.0.10.90}{10.0.10.90:9301},}
[2016-10-20T08:36:34,380][INFO ][o.e.c.s.ClusterService ] [Danny-Desktop-ES-5.0-node2] removed {{Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300},}, reason: master_failed ({Danny-Desktop-ES-5.0}{7SufA_1ETJuAre__QR4low}{dPuuDZVLR--08gFDFyj5HQ}{10.0.10.90}{10.0.10.90:9300})
[2016-10-20T08:36:35,953][INFO ][o.e.x.m.e.Exporters ] [Danny-Desktop-ES-5.0-node2] skipping exporter [default_local] as it isn't ready yet
[2016-10-20T08:36:35,953][ERROR][o.e.x.m.AgentService ] [Danny-Desktop-ES-5.0-node2] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.0.0-rc1.jar:5.0.0-rc1]
at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.0.0-rc1.jar:5.0.0-rc1]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_102]
I also tried bringing down both nodes, installing x-pack on both, and then bringing both back up, and that worked.
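For anyone following along, the sequence that worked was roughly this (the folder names are just my local layout, so treat them as placeholders):

```shell
# With BOTH nodes stopped, install x-pack into each install folder:
./es-node1/bin/elasticsearch-plugin install x-pack
./es-node2/bin/elasticsearch-plugin install x-pack
# then start both nodes again
```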
Is it possible to push out x-pack in a rolling deployment, or does it require a full cluster restart?