Network interruption, some nodes not recovering


(Grant) #1

So I've had this happen a couple of times now: a network interruption leaves some number of nodes in our cluster unable to rejoin.

es-r08 => this node dropped to yellow and failed to rejoin even after all the other nodes had re-formed a cluster and gone green:

[2012-02-08 16:11:49,757][INFO ][discovery.zen ] [prod-es-r08] master_left [[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[/10.182.14.95:9300]]], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2012-02-08 16:11:49,766][INFO ][cluster.service ] [prod-es-r08] master {new [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]], previous [prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]]}, removed {[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]],}, reason: zen-disco-master_failed ([prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[/10.182.14.95:9300]])
[2012-02-08 16:11:50,772][INFO ][discovery.zen ] [prod-es-r08] master_left [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]]], reason [no longer master]
[2012-02-08 16:11:50,772][INFO ][cluster.service ] [prod-es-r08] master {new [prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[prod-es-r08.ihost.brewster.com/10.180.48.255:9300]], previous [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]]}, removed {[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]],}, reason: zen-disco-master_failed ([prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]])
[2012-02-08 16:11:52,651][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:52,805][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:52,841][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:53,318][WARN ][http.netty ] [prod-es-r08] Caught exception while handling client http traffic, closing connection [id: 0x52fa512f, /10.182.14.95:36064 => /10.180.48.255:9200]
java.lang.IllegalArgumentException: empty text
    at org.elasticsearch.common.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:103)
    at org.elasticsearch.common.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:68)
    at org.elasticsearch.common.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:81)
    at org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:198)
    at org.elasticsearch.common.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:107)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:470)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:443)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:81)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:274)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:261)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:351)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:282)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:202)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
[2012-02-08 16:11:53,831][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:53,901][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:54,438][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:54,819][WARN ][http.netty ] [prod-es-r08] Caught exception while handling client http traffic, closing connection [id: 0x455a08c8, /10.182.14.95:36084 => /10.180.48.255:9200]
java.lang.IllegalArgumentException: empty text
    [same "empty text" stack trace as above]
[2012-02-08 16:11:55,278][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:11:56,319][WARN ][http.netty ] [prod-es-r08] Caught exception while handling client http traffic, closing connection [id: 0x073a321f, /10.182.14.95:36091 => /10.180.48.255:9200]
java.lang.IllegalArgumentException: empty text
    [same "empty text" stack trace as above]
[2012-02-08 16:11:56,360][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:12:02,606][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:12:03,194][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:12:03,674][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:12:03,981][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]
[2012-02-08 16:12:04,196][WARN ][discovery.zen ] [prod-es-r08] master should not receive new cluster state from [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]]

I restart the node and it rejoins:

[2012-02-08 16:17:47,913][INFO ][node ] [prod-es-r08] {0.18.7}[10530]: stopping ...
[2012-02-08 16:17:48,086][INFO ][node ] [prod-es-r08] {0.18.7}[10530]: stopped
[2012-02-08 16:17:48,086][INFO ][node ] [prod-es-r08] {0.18.7}[10530]: closing ...
[2012-02-08 16:17:48,141][INFO ][node ] [prod-es-r08] {0.18.7}[10530]: closed
[2012-02-08 16:18:01,147][INFO ][bootstrap ] max_open_files [65507]
[2012-02-08 16:18:01,253][WARN ][common.jna ] Unknown mlockall error 0
[2012-02-08 16:18:01,263][INFO ][node ] [prod-es-r08] {0.18.7}[21023]: initializing ...
[2012-02-08 16:18:01,332][INFO ][plugins ] [prod-es-r08] loaded [transport-thrift, hashing-analyzer], sites [bigdesk, transport-thrift, head]
[2012-02-08 16:18:04,667][INFO ][node ] [prod-es-r08] {0.18.7}[21023]: initialized
[2012-02-08 16:18:04,667][INFO ][node ] [prod-es-r08] {0.18.7}[21023]: starting ...
[2012-02-08 16:18:04,699][INFO ][thrift ] [prod-es-r08] bound on port [9500]
[2012-02-08 16:18:04,785][INFO ][transport ] [prod-es-r08] bound_address {inet[/10.180.48.255:9300]}, publish_address {inet[/10.180.48.255:9300]}
[2012-02-08 16:18:07,965][INFO ][cluster.service ] [prod-es-r08] detected_master [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]], added {[prod-es-r07][V3ru_hjBTaqgzMa7XQ_E_Q][inet[/10.180.48.216:9300]],[prod-es-r01][aGaKZ7oqT6-uTPktGffjdg][inet[/10.180.48.178:9300]],[prod-es-r03][f-xDmnnCS26qqECIZiR68w][inet[/10.182.14.97:9300]],[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]],[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[/10.182.14.95:9300]],[prod-es-r05][tRwy2ed6Tpay3ay0iinDJA][inet[/10.180.39.14:9300]],[prod-es-r06][K9y2GoEwTRqLhN-Kcx5jjw][inet[/10.180.46.203:9300]],}, reason: zen-disco-receive(from master [[prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[/10.180.35.110:9300]]])
[2012-02-08 16:18:08,045][INFO ][discovery ] [prod-es-r08] brewster/XUTG4WkaRoe6KKRcI4NnkA
[2012-02-08 16:18:08,308][INFO ][http ] [prod-es-r08] bound_address {inet[/10.180.48.255:9200]}, publish_address {inet[/10.180.48.255:9200]}
[2012-02-08 16:18:08,310][INFO ][node ] [prod-es-r08] {0.18.7}[21023]: started
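To confirm a rejoin without tailing logs, something like this Python sketch can poll the cluster health endpoint until it goes green. The helper itself is hypothetical (not part of Elasticsearch); the node address is the one from the logs above, and the fetcher is injected so it can be swapped for testing:

```python
import json
import time
import urllib.request


def wait_for_status(fetch, want=("green",), retries=10, delay=1.0):
    """Poll a cluster-health fetcher until the cluster reaches a wanted status.

    `fetch` is any callable returning the JSON body of /_cluster/health;
    raises TimeoutError if the status never appears within `retries` polls.
    """
    for _ in range(retries):
        health = json.loads(fetch())
        if health.get("status") in want:
            return health
        time.sleep(delay)
    raise TimeoutError("cluster never reached %s" % (want,))


def http_fetch(url="http://10.180.48.255:9200/_cluster/health"):
    # Node address taken from the log excerpt above; adjust for your cluster.
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

Calling `wait_for_status(http_fetch)` after a restart gives a clean pass/fail instead of eyeballing the cluster.service lines.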

Meanwhile on es-r04, the node that took over as master:

[2012-02-08 16:11:52,589][INFO ][discovery.zen ] [prod-es-r04] master_left [[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]]], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2012-02-08 16:11:52,590][INFO ][cluster.service ] [prod-es-r04] master {new [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]], previous [prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]]}, removed {[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]],}, reason: zen-disco-master_failed ([prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[prod-es-r02.ihost.brewster.com/10.182.14.95:9300]])
[2012-02-08 16:11:52,829][WARN ][indices.cluster ] [prod-es-r04] [contact_documents-5-0][0] failed to start shard
org.elasticsearch.indices.recovery.RecoveryFailedException: Index Shard [contact_documents-5-0][0]: Recovery failed from [prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[prod-es-r08.ihost.brewster.com/10.180.48.255:9300]] into [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]]
    at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:263)
    at org.elasticsearch.indices.recovery.RecoveryTarget.access$100(RecoveryTarget.java:73)
    at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:161)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: org.elasticsearch.transport.RemoteTransportException: [prod-es-r08][inet[/10.180.48.255:9300]][index/shard/recovery/startRecovery]
Caused by: org.elasticsearch.transport.NotSerializableTransportException: [org.elasticsearch.index.engine.RecoveryEngineException] [contact_documents-5-0][0] Phase[1] Execution failed; [contact_documents-5-0][0] Failed to transfer [36] files with total size of [15.2mb]; [prod-es-r04][inet[/10.180.35.110:9300]][index/shard/recovery/filesInfo]; [prod-es-r04][inet[/10.180.35.110:9300]] Node not connected;
[2012-02-08 16:11:52,830][WARN ][cluster.action.shard ] [prod-es-r04] sending failed shard for [contact_documents-5-0][0], node[1uUMHJs_T7aj_3GNcJlZVw], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[Index Shard [contact_documents-5-0][0]: Recovery failed from [prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[prod-es-r08.ihost.brewster.com/10.180.48.255:9300]] into [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]]]; nested: RemoteTransportException[[prod-es-r08][inet[/10.180.48.255:9300]][index/shard/recovery/startRecovery]]; nested: NotSerializableTransportException[[org.elasticsearch.index.engine.RecoveryEngineException] [contact_documents-5-0][0] Phase[1] Execution failed; [contact_documents-5-0][0] Failed to transfer [36] files with total size of [15.2mb]; [prod-es-r04][inet[/10.180.35.110:9300]][index/shard/recovery/filesInfo]; [prod-es-r04][inet[/10.180.35.110:9300]] Node not connected; ]; ]]
[2012-02-08 16:11:52,830][WARN ][cluster.action.shard ] [prod-es-r04] received shard failed for [contact_documents-5-0][0], node[1uUMHJs_T7aj_3GNcJlZVw], [R], s[INITIALIZING], reason [Failed to start shard, message [RecoveryFailedException[Index Shard [contact_documents-5-0][0]: Recovery failed from [prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[prod-es-r08.ihost.brewster.com/10.180.48.255:9300]] into [prod-es-r04][1uUMHJs_T7aj_3GNcJlZVw][inet[prod-es-r04.ihost.brewster.com/10.180.35.110:9300]]]; nested: RemoteTransportException[[prod-es-r08][inet[/10.180.48.255:9300]][index/shard/recovery/startRecovery]]; nested: NotSerializableTransportException[[org.elasticsearch.index.engine.RecoveryEngineException] [contact_documents-5-0][0] Phase[1] Execution failed; [contact_documents-5-0][0] Failed to transfer [36] files with total size of [15.2mb]; [prod-es-r04][inet[/10.180.35.110:9300]][index/shard/recovery/filesInfo]; [prod-es-r04][inet[/10.180.35.110:9300]] Node not connected; ]; ]]
[2012-02-08 16:11:56,355][INFO ][cluster.service ] [prod-es-r04] added {[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[/10.182.14.95:9300]],}, reason: zen-disco-receive(join from node[[prod-es-r02][-OxyqMCbSxWi9DbVcPtH2A][inet[/10.182.14.95:9300]]])
[2012-02-08 16:17:48,097][INFO ][cluster.service ] [prod-es-r04] removed {[prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[prod-es-r08.ihost.brewster.com/10.180.48.255:9300]],}, reason: zen-disco-node_failed([prod-es-r08][IO4LAR4eSX6d9tQdQ5f4YQ][inet[/10.180.48.255:9300]]), reason transport disconnected (with verified connect)
[2012-02-08 16:18:07,860][INFO ][cluster.service ] [prod-es-r04] added {[prod-es-r08][XUTG4WkaRoe6KKRcI4NnkA][inet[/10.180.48.255:9300]],}, reason: zen-disco-receive(join from node[[prod-es-r08][XUTG4WkaRoe6KKRcI4NnkA][inet[/10.180.48.255:9300]]])
