Nodes disconnecting randomly

Hi,
We are seeing nodes disconnect from the cluster frequently. Below are the errors from the master and data nodes. We are using ELK version 5.3.1. Could you please help us prevent the node disconnection issue?

**Errors from the master node:**

            [2021-02-22T12:38:16,153][INFO ][o.e.c.r.a.AllocationService] [elk-denmod-web] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300} transport disconnected]).
            [2021-02-22T12:38:16,153][INFO ][o.e.c.s.ClusterService   ] [elk-denmod-web] removed {{elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300},}, reason: zen-disco-node-failed({elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300}), reason(transport disconnected)[{elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300} transport disconnected]
            [2021-02-22T12:38:18,197][INFO ][o.e.c.r.DelayedAllocationService] [elk-denmod-web] scheduling reroute for delayed shards in [57.9s] (44 delayed shards)
            [2021-02-22T12:38:20,318][INFO ][o.e.c.s.ClusterService   ] [elk-denmod-web] added {{elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300},}, reason: zen-disco-node-join[{elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300}]
            [2021-02-22T12:39:45,620][INFO ][o.e.c.r.a.AllocationService] [elk-denmod-web] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[en_m_54][2]] ...]).
            [2021-02-22T12:41:57,612][WARN ][o.e.l.LicenseService     ] [elk-denmod-web]

**Errors from the data node:**

            [2021-02-22T13:56:31,924][INFO ][o.e.d.z.ZenDiscovery     ] [elk-denmod-6] master_left [{elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300}], reason [transport disconnected]
            [2021-02-22T13:56:31,924][WARN ][o.e.d.z.ZenDiscovery     ] [elk-denmod-6] master left (reason = transport disconnected), current nodes: nodes: 
               {elk-denmod-6}{fU5vCMTTS46Pboz5zMXD0Q}{3Tomsar_ReuXxB8_4Ym0fA}{168.124.25.122}{168.124.25.122:9300}, local
               {elk-denmod-5}{yez29U5iQxWl2hh6Yh-_xg}{IKnp6uBtRGKlE9j3a70S9g}{168.124.54.142}{168.124.54.142:9300}
               {elk-denmod-4}{rS_6QTYGSiKTyXJnjvSdyw}{cMJFNWhxSG-DqTCuyKLoyQ}{168.124.170.244}{168.124.170.244:9300}
               {elk-denmod-1}{t7JmFe-iSn63CALYD-0lxw}{7aHU8PepTH-PAzPrnr-q8w}{168.124.25.140}{168.124.25.140:9300}
               {elk-denmod-3}{5w1hFgnvRlOxtQ0QrX3tKQ}{wh63T78fSmWyamXRZje8jg}{168.124.29.126}{168.124.29.126:9300}
               {elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300}, master
               {elk-denmod-2}{VnvJTaKIQIqrdzXSlPUxxQ}{LIW0GTJdTMei5nWYebyszg}{168.124.29.129}{168.124.29.129:9300}

            [2021-02-22T13:56:52,376][WARN ][o.e.c.NodeConnectionsService] [elk-denmod-6] failed to connect to node {elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300} (tried [1] times)
            org.elasticsearch.transport.ConnectTransportException: [elk-denmod-web][168.124.147.161:9300] general node connection failure
                            at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:519) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:460) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:314) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:301) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.NodeConnectionsService.validateNodeConnected(NodeConnectionsService.java:121) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.NodeConnectionsService.connectToNodes(NodeConnectionsService.java:87) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:780) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:633) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1117) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.3.1.jar:5.3.1]
                            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
                            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
                            at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
            Caused by: java.lang.IllegalStateException: handshake failed, channel already closed
                            at org.elasticsearch.transport.TcpTransport.executeHandshake(TcpTransport.java:1549) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:502) ~[elasticsearch-5.3.1.jar:5.3.1]
                            ... 14 more
            [2021-02-22T13:56:52,423][INFO ][o.e.g.DanglingIndicesState] [elk-denmod-6] failed to send allocated dangled
            org.elasticsearch.discovery.MasterNotDiscoveredException: no master to send allocate dangled request
                            at org.elasticsearch.gateway.LocalAllocateDangledIndices.allocateDangled(LocalAllocateDangledIndices.java:84) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.gateway.DanglingIndicesState.allocateDanglingIndices(DanglingIndicesState.java:164) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.gateway.DanglingIndicesState.processDanglingIndices(DanglingIndicesState.java:82) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.gateway.DanglingIndicesState.clusterChanged(DanglingIndicesState.java:185) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService.lambda$publishAndApplyChanges$11(ClusterService.java:824) ~[elasticsearch-5.3.1.jar:5.3.1]
                            at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) [?:1.8.0_131]
                            at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742) [?:1.8.0_131]
                            at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) [?:1.8.0_131]
                            at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:821) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:633) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1117) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.3.1.jar:5.3.1]
                            at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.3.1.jar:5.3.1]
                            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
                            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
                            at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
            [2021-02-22T13:56:52,454][INFO ][o.e.d.z.ZenDiscovery     ] [elk-denmod-6] failed to send join request to master [{elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300}], reason [RemoteTransportException[[elk-denmod-web][168.124.147.161:9300][internal:discovery/zen/join]]; nested: IllegalStateException[failure when sending a validation request to node]; nested: NodeDisconnectedException[[elk-denmod-6][168.124.25.122:9300][internal:discovery/zen/join/validate] disconnected]; ]
            [2021-02-22T13:56:56,120][INFO ][o.e.c.s.ClusterService   ] [elk-denmod-6] detected_master {elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300}, reason: zen-disco-receive(from master [master {elk-denmod-web}{sMFUmPkRQPiJPstaGBObwg}{SY08ahurSG2uH-TWIs4aJA}{xspw10f206w.pharma.aventis.com}{168.124.147.161:9300} committed version [68667]])

Can someone help me with this, please?

Elasticsearch 5.3.1 is very old and has long been EOL, so I would recommend you upgrade. For someone to help troubleshoot this, you will probably need to provide a lot more detail and context. How is your cluster configured and deployed? Are there any other clues in the logs around these times, e.g. frequent or slow GC? What load is the cluster under? What is the use case?
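
If it helps, here is a minimal sketch of how you could check the nodes for heap/GC pressure around the times of the disconnects using the nodes stats API. It assumes the cluster is reachable at http://localhost:9200 with no security enabled; adjust the URL and authentication for your setup.

```python
import requests

# Assumption: cluster reachable on localhost:9200 without authentication.
ES_URL = "http://localhost:9200"

# The nodes stats API exposes per-node JVM heap usage and GC statistics.
stats = requests.get(ES_URL + "/_nodes/stats/jvm").json()

for node_id, node in stats["nodes"].items():
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    old_gc = node["jvm"]["gc"]["collectors"]["old"]
    print(
        "{}: heap {}% used, old-gen GC count {}, old-gen GC time {} ms".format(
            node["name"],
            heap_pct,
            old_gc["collection_count"],
            old_gc["collection_time_in_millis"],
        )
    )
```

Rapidly climbing old-generation GC counts or collection times on the node that keeps dropping out would point towards heap pressure rather than a pure network problem.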

It would also help if you could provide the full output of the cluster stats API.
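
For example, a minimal sketch of how you could capture that output to a file to attach here (again assuming an unauthenticated cluster on localhost:9200):

```python
import requests

# Assumption: cluster reachable on localhost:9200 without authentication.
resp = requests.get("http://localhost:9200/_cluster/stats?human&pretty")

# Save the full cluster stats response so it can be attached to the thread.
with open("cluster_stats.json", "w") as f:
    f.write(resp.text)

print("Cluster status:", resp.json()["status"])
```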