Under heavy load - yes and no. I thought I had stopped all of my apps from writing to Elasticsearch while I was updating it from 6.2.1 to 6.2.2. After a while a master node was elected, and there were no issues other than the one I mentioned. I do see these as well:
esm1 | [2018-02-23T16:22:51,187][INFO ][o.e.m.j.JvmGcMonitorService] [esm1] [gc][47757] overhead, spent [264ms] collecting in the last [1s]
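For reference, this is roughly how I look at heap and GC pressure per node when these overhead messages show up, using the nodes stats API (a minimal Python sketch; the host/port are assumptions, the HTTP port is 9200 rather than the transport port 9300 shown in the logs):

```python
import requests

# Assumed HTTP endpoint of one Elasticsearch node; adjust for your cluster.
ES = "http://10.0.0.234:9200"

# Nodes stats API (ES 6.x): per-node JVM heap usage and GC collector timings.
stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
for node_id, node in stats["nodes"].items():
    jvm = node["jvm"]
    heap_pct = jvm["mem"]["heap_used_percent"]
    old_gc = jvm["gc"]["collectors"]["old"]
    print(f'{node["name"]}: heap {heap_pct}%, '
          f'old GC {old_gc["collection_count"]} collections, '
          f'{old_gc["collection_time_in_millis"]} ms total')
```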
All Elasticsearch nodes (the cluster) are on Google Cloud Platform (Compute Engine). I did not notice any network issues between the nodes (they are all part of the same network).
I also noticed the following:
esm1 | [2018-02-23T14:59:02,410][INFO ][o.e.m.j.JvmGcMonitorService] [esm1] [gc][42734] overhead, spent [263ms] collecting in the last [1s]
esm1 | [2018-02-23T15:15:39,597][INFO ][o.e.c.s.MasterService ] [esm1] zen-disco-node-left({esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true}), reason(left)[{esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true} left], reason: removed {{esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true},}
esm1 | [2018-02-23T15:15:39,676][INFO ][o.e.c.s.ClusterApplierService] [esm1] removed {{esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true},}, reason: apply cluster state (from master [master {esm1}{LM1Whu2_RyaGBk-vK2A9Eg}{LW0TNipgQECec2V7gXSwBA}{10.0.0.234}{10.0.0.234:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true} committed version [11380] source [zen-disco-node-left({esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true}), reason(left)[{esc3}{G5HTaP8RQBWcNS-9QPX57Q}{kZ8mKLRrQTSWx9l1BYiWaQ}{10.0.0.238}{10.0.0.238:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true} left]]])
esm1 | [2018-02-23T15:18:54,001][INFO ][o.e.c.s.MasterService ] [esm1] zen-disco-node-join[{esc3}{nR9iRdk0TZeQ9a0IPZerhQ}{cXjqWITURtCm2xS4KFguGA}{10.0.0.17}{10.0.0.17:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true}], reason: added {{esc3}{nR9iRdk0TZeQ9a0IPZerhQ}{cXjqWITURtCm2xS4KFguGA}{10.0.0.17}{10.0.0.17:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true},}
esm1 | [2018-02-23T15:18:59,353][INFO ][o.e.c.s.ClusterApplierService] [esm1] added {{esc3}{nR9iRdk0TZeQ9a0IPZerhQ}{cXjqWITURtCm2xS4KFguGA}{10.0.0.17}{10.0.0.17:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true},}, reason: apply cluster state (from master [master {esm1}{LM1Whu2_RyaGBk-vK2A9Eg}{LW0TNipgQECec2V7gXSwBA}{10.0.0.234}{10.0.0.234:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true} committed version [11382] source [zen-disco-node-join[{esc3}{nR9iRdk0TZeQ9a0IPZerhQ}{cXjqWITURtCm2xS4KFguGA}{10.0.0.17}{10.0.0.17:9300}{ml.machine_memory=7847518208, ml.max_open_jobs=20, ml.enabled=true}]]])
esm1 | [2018-02-23T15:19:32,938][INFO ][o.e.c.m.MetaDataMappingService] [esm1] [logstash-2018.02.23/l2maaiOWQBSf08rqEg-icg] update_mapping [doc]
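After the node-left/node-join cycle above I checked that the cluster had recovered; for reference, a minimal sketch of how health and membership can be verified (Python, assuming the requests library and the same host/port assumption as above):

```python
import requests

ES = "http://10.0.0.234:9200"  # assumed HTTP endpoint

# Cluster health: status should return to green once esc3 rejoins and shards recover.
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"],
      "nodes:", health["number_of_nodes"],
      "relocating:", health["relocating_shards"],
      "unassigned:", health["unassigned_shards"])

# _cat/nodes lists the current members, so the rejoined node should appear here.
print(requests.get(f"{ES}/_cat/nodes?v&h=name,ip,node.role,master,version").text)
```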
and this (on the master node):