I have a node in my Elasticsearch cluster that keeps logging this error:
[2018-11-07T11:42:44,774][WARN ][o.e.x.s.t.n.SecurityNetty4ServerTransport] [es-node-001] exception caught on transport layer [NettyTcpChannel{localAddress=/173.17.96.31:9300, remoteAddress=/173.16.39.60:49302}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: bad record MAC
(stack trace omitted)
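For context, x-pack security with TLS on the transport layer is enabled on the nodes. The relevant part of elasticsearch.yml looks roughly like this (a sketch only; the verification mode and certificate paths below are placeholders, not necessarily my exact values):

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
# PKCS#12 keystore/truststore; actual file names and passwords differ
xpack.security.transport.ssl.keystore.path: certs/es-node-001.p12
xpack.security.transport.ssl.truststore.path: certs/es-node-001.p12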
The error is followed by these messages, where the node loses contact with the master and drops out of the cluster:
[2018-11-07T11:42:45,837][INFO ][o.e.d.z.ZenDiscovery ] [es-node-001] master_left [{es-node-001}{V4A7hjvtQcyFW7BlxG-j4w}{Togj8cgQQ9eyZJV32LJB_g}{es-node-001}{173.36.39.60:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2018-11-07T11:42:45,837][WARN ][o.e.d.z.ZenDiscovery ] [es-node-001] master left (reason = failed to ping, tried [3] times, each with maximum [30s] timeout), current nodes: nodes:
{es-node-002}{rMYDt_5ITuCTJbjUPluLJA}{6Y64mGKzQaiUJIWSAhk9bQ}{es-clust-002}{173.36.39.61:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
{es-node-001}{V4A7hjvtQcyFW7BlxG-j4w}{Togj8cgQQ9eyZJV32LJB_g}{es-clust-001}{173.36.39.60:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, master
{es-node-002}{44pA9ErPTb-y3zOylW8Z_Q}{3FmUmjcvRWqrgSNJw6QhOQ}{es-node-002}{173.37.96.32:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}
{es-node-001}{xMUcjow7QzSecNxmKNBJSw}{WaFQ8xs9R72OjCfOn7Jwyw}{es-node-001}{173.37.96.31:9300}{ml.machine_memory=67556810752, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, local
[2018-11-07T11:42:45,839][INFO ][o.e.x.w.WatcherService ] [es-node-001] stopping watch service, reason [no master node]
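The [3] tries and [30s] timeout in that message come from the Zen discovery fault-detection settings; for reference, these are the standard defaults, which match what the log reports:

discovery.zen.fd.ping_interval: 1s   # how often the node pings the master
discovery.zen.fd.ping_timeout: 30s   # the [30s] timeout in the log line above
discovery.zen.fd.ping_retries: 3     # the [3] tries in the log line above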
Then, a few seconds later, I see this message, where the master is automatically detected again and the node rejoins the cluster:
[2018-11-07T11:42:48,910][INFO ][o.e.c.s.ClusterApplierService] [es-node-001] detected_master {es-clust-001}{V4A7hjvtQcyFW7BlxG-j4w}{Togj8cgQQ9eyZJV32LJB_g}{es-clust-001}{173.36.39.60:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, reason: apply cluster state (from master [master {es-clust-001}{V4A7hjvtQcyFW7BlxG-j4w}{Togj8cgQQ9eyZJV32LJB_g}{es-clust-001}{173.36.39.60:9300}{ml.machine_memory=67556810752, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} committed version [96861]])
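For completeness, discovery is configured with plain unicast hosts, roughly like this (the host list and the minimum_master_nodes value here are illustrative, not my exact settings):

discovery.zen.ping.unicast.hosts: ["es-node-001", "es-node-002", "es-clust-001", "es-clust-002"]
discovery.zen.minimum_master_nodes: 3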
Because of this drop-and-rejoin, the cluster state goes red for a while, until the shards are reallocated. Is there any reason why this might be happening? It is causing instability in my cluster.
I do not see this happening on any other node in the cluster.
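For reference, while the cluster is red I watch the recovery with the standard cluster APIs, along these lines (host, port, scheme and credentials are placeholders):

# overall cluster status (green / yellow / red) and initializing/unassigned shard counts
curl -u elastic 'http://es-node-001:9200/_cluster/health?pretty'
# shards that are actively being recovered / re-allocated
curl -u elastic 'http://es-node-001:9200/_cat/recovery?v&active_only=true'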