Node disconnects from cluster and cannot rejoin existing cluster (ES 7.16.2)

Hi Team,

We have an Elasticsearch (ES 7.16.2) cluster of 3 nodes in which one node (node-3) randomly gets disconnected from the cluster.
Node info:
node-1 = master, data (currently master)
node-2 = master, data
node-3 = master, data (not able to join cluster)
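For context, the discovery-related settings on each node look roughly like this (a sketch only; the exact file is not shown here, and the cluster name is a placeholder — the seed hosts match the "using initial hosts [node-1, node-2]" line in the logs below):

```yaml
# elasticsearch.yml (sketch, not our exact file)
cluster.name: my-cluster          # placeholder
node.name: node-3
node.roles: [ master, data ]
network.host: node-3
discovery.seed_hosts: [ "node-1", "node-2" ]
# cluster.initial_master_nodes was only set for the initial bootstrap
```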

Node-1 logs when node-3 is randomly disconnected from the cluster:

[2022-11-27T05:30:04,688][INFO ][o.e.x.i.IndexLifecycleTransition] [node-1] moving index [.monitoring-kibana-7-2022.11.27] from [{"phase":"hot","action":"set_priority","name":"set_priority"}] to [{"phase":"hot","action":"complete","name":"complete"}] in policy [45DaysData]
[2022-11-27T05:30:04,943][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-7-2022.11.27][0]]]).
[2022-11-27T06:09:27,809][ERROR][o.e.x.m.c.i.IndexStatsCollector] [node-1] collector [index-stats] timed out when collecting data: node [dlzluyvNTAiDA6PJjnyMPw] did not respond within [10s]
[2022-11-27T06:09:32,705][INFO ][o.e.t.ClusterConnectionManager] [node-1] transport connection to [{node-3}{dlzluyvNTAiDA6PJjnyMPw}{xJclS9LiTvGOlvHGXnMGyQ}{node-3}{node-3:9300}{cdfhilmrstw}] closed by remote
[2022-11-27T06:09:32,752][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [GREEN] to [YELLOW] (reason: [{node-3}{dlzluyvNTAiDA6PJjnyMPw}{xJclS9LiTvGOlvHGXnMGyQ}{node-3}{node-3:9300}{cdfhilmrstw} reason: disconnected]).
[2022-11-27T06:09:32,837][INFO ][o.e.c.s.MasterService    ] [node-1] node-left[{node-3}{dlzluyvNTAiDA6PJjnyMPw}{xJclS9LiTvGOlvHGXnMGyQ}{node-3}{node-3:9300}{cdfhilmrstw} reason: disconnected], term: 282, version: 967964, delta: removed {{node-3}{dlzluyvNTAiDA6PJjnyMPw}{xJclS9LiTvGOlvHGXnMGyQ}{node-3}{node-3:9300}{cdfhilmrstw}}
[2022-11-27T06:09:33,473][INFO ][o.e.c.s.ClusterApplierService] [node-1] removed {{node-3}{dlzluyvNTAiDA6PJjnyMPw}{xJclS9LiTvGOlvHGXnMGyQ}{node-3}{node-3:9300}{cdfhilmrstw}}, term: 282, version: 967964, reason: Publication{term=282, version=967964}

After this, node-3 keeps running in isolation and is not able to reconnect to the cluster, throwing the connect exception below.
The other two nodes are able to form a cluster after multiple restarts and we don't see any issue with either of them, but the third node is unable to join the cluster. We also tried starting node-3 with a new (empty) data folder, as a fresh node, and observed the same issue.

Node-3 debug logs when it is not able to join cluster:

[2022-11-30T13:08:50,895][INFO ][o.e.i.g.DatabaseNodeService] [node-3] initialized database registry, using geoip-databases directory 
[2022-11-30T13:08:51,900][DEBUG][i.n.u.ResourceLeakDetector] [node-3] -Dio.netty.leakDetection.level: simple
[2022-11-30T13:08:51,900][DEBUG][i.n.u.ResourceLeakDetector] [node-3] -Dio.netty.leakDetection.targetRecords: 4
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.numHeapArenas: 8
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.numDirectArenas: 0
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.pageSize: 8192
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.maxOrder: 11
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.chunkSize: 16777216
[2022-11-30T13:08:51,916][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.smallCacheSize: 256
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.normalCacheSize: 64
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.cacheTrimInterval: 8192
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.useCacheForAllThreads: true
[2022-11-30T13:08:51,932][DEBUG][i.n.b.PooledByteBufAllocator] [node-3] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[2022-11-30T13:08:51,947][DEBUG][i.n.u.i.InternalThreadLocalMap] [node-3] -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
[2022-11-30T13:08:51,947][DEBUG][i.n.u.i.InternalThreadLocalMap] [node-3] -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
[2022-11-30T13:08:51,963][INFO ][o.e.t.NettyAllocator     ] [node-3] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-30T13:08:51,994][DEBUG][o.e.h.n.Netty4HttpServerTransport] [node-3] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[64kb], max_composite_buffer_components[69905], pipelining_max_events[10000]
[2022-11-30T13:08:52,010][DEBUG][o.e.i.r.RecoverySettings ] [node-3] using max_bytes_per_sec[40mb]
[2022-11-30T13:08:52,041][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [node-3] using initial hosts [node-1, node-2]
[2022-11-30T13:08:52,072][INFO ][o.e.d.DiscoveryModule    ] [node-3] using discovery type [zen] and seed hosts providers [settings]
[2022-11-30T13:08:53,087][INFO ][o.e.g.DanglingIndicesState] [node-3] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2022-11-30T13:08:53,706][DEBUG][o.e.n.Node               ] [node-3] initializing HTTP handlers ...
[2022-11-30T13:08:54,003][INFO ][o.e.n.Node               ] [node-3] initialized
[2022-11-30T13:08:54,003][INFO ][o.e.n.Node               ] [node-3] starting ...
[2022-11-30T13:08:54,003][DEBUG][o.e.l.LicenseService     ] [node-3] initializing license state
[2022-11-30T13:08:54,003][DEBUG][o.e.x.m.MonitoringService] [node-3] monitoring service is starting
[2022-11-30T13:08:54,003][DEBUG][o.e.x.m.MonitoringService] [node-3] monitoring service started
[2022-11-30T13:08:54,003][DEBUG][o.e.x.m.c.CleanerService ] [node-3] starting cleaning service
[2022-11-30T13:08:54,003][DEBUG][o.e.x.m.c.CleanerService ] [node-3] cleaning service started
[2022-11-30T13:08:54,003][DEBUG][o.e.x.s.c.f.PersistentCache] [node-3] loading persistent cache on data path [NodePath{path=D:\Elasticsearch_Data8\nodes\0, indicesPath=D:\Elasticsearch_Data8\nodes\0\indices, fileStore=New Volume (D:), majorDeviceNumber=-1, minorDeviceNumber=-1}]
[2022-11-30T13:08:54,003][DEBUG][o.e.x.s.c.f.PersistentCache] [node-3] committing
[2022-11-30T13:08:54,019][INFO ][o.e.x.s.c.f.PersistentCache] [node-3] persistent cache index loaded
[2022-11-30T13:08:54,035][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node-3] deprecation component started
[2022-11-30T13:08:54,050][DEBUG][i.n.c.MultithreadEventLoopGroup] [node-3] -Dio.netty.eventLoopThreads: 8
[2022-11-30T13:08:54,081][DEBUG][i.n.c.n.NioEventLoop     ] [node-3] -Dio.netty.noKeySetOptimization: true
[2022-11-30T13:08:54,081][DEBUG][i.n.c.n.NioEventLoop     ] [node-3] -Dio.netty.selectorAutoRebuildThreshold: 512
[2022-11-30T13:08:54,081][DEBUG][i.n.u.i.PlatformDependent] [node-3] org.jctools-core.MpscChunkedArrayQueue: unavailable
[2022-11-30T13:08:54,128][DEBUG][o.e.t.n.Netty4Transport  ] [node-3] using profile[default], worker_count[4], port[9300-9400], bind_host[[node-3]], publish_host[[]], receive_predictor[64kb->64kb]
[2022-11-30T13:08:54,144][DEBUG][o.e.t.TcpTransport       ] [node-3] binding server bootstrap to: [node-3]
[2022-11-30T13:08:54,160][DEBUG][i.n.c.DefaultChannelId   ] [node-3] -Dio.netty.processId: 1968 (auto-detected)
[2022-11-30T13:08:54,160][DEBUG][i.n.u.NetUtil            ] [node-3] -Djava.net.preferIPv4Stack: false
[2022-11-30T13:08:54,160][DEBUG][i.n.u.NetUtil            ] [node-3] -Djava.net.preferIPv6Addresses: false
[2022-11-30T13:08:54,160][DEBUG][i.n.u.NetUtilInitializations] [node-3] Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
[2022-11-30T13:08:54,160][DEBUG][i.n.u.NetUtil            ] [node-3] Failed to get SOMAXCONN from sysctl and file. Default: 200
[2022-11-30T13:08:54,206][DEBUG][i.n.b.ByteBufUtil        ] [node-3] -Dio.netty.allocator.type: pooled
[2022-11-30T13:08:54,206][DEBUG][i.n.b.ByteBufUtil        ] [node-3] -Dio.netty.threadLocalDirectBufferSize: 0
[2022-11-30T13:08:54,206][DEBUG][i.n.b.ByteBufUtil        ] [node-3] -Dio.netty.maxThreadLocalCharBufferSize: 16384
[2022-11-30T13:08:54,222][DEBUG][o.e.t.TcpTransport       ] [node-3] Bound profile [default] to address {node-3:9300}
[2022-11-30T13:08:54,222][INFO ][o.e.t.TransportService   ] [node-3] publish_address {node-3:9300}, bound_addresses {node-3:9300}
[2022-11-30T13:08:54,410][DEBUG][o.e.g.PersistedClusterStateService] [node-3] writing cluster state took [204ms]; wrote full state with [0] indices
[2022-11-30T13:08:54,425][INFO ][o.e.b.BootstrapChecks    ] [node-3] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-11-30T13:08:54,441][DEBUG][o.e.d.SeedHostsResolver  ] [node-3] using max_concurrent_resolvers [10], resolver timeout [5s]
[2022-11-30T13:08:54,441][DEBUG][o.e.x.m.i.TrainedModelStatsService] [node-3] About to start TrainedModelStatsService
[2022-11-30T13:08:54,441][DEBUG][o.e.t.TransportService   ] [node-3] now accepting incoming requests
[2022-11-30T13:08:54,441][DEBUG][o.e.c.c.Coordinator      ] [node-3] startInitialJoin: coordinator becoming CANDIDATE in term 283 (was null, lastKnownLeader was [Optional.empty])
[2022-11-30T13:08:54,456][DEBUG][o.e.n.Node               ] [node-3] waiting to join the cluster. timeout [30s]
[2022-11-30T13:08:54,521][DEBUG][i.n.b.AbstractByteBuf    ] [node-3] -Dio.netty.buffer.checkAccessible: true
[2022-11-30T13:08:54,522][DEBUG][i.n.b.AbstractByteBuf    ] [node-3] -Dio.netty.buffer.checkBounds: true
[2022-11-30T13:08:54,522][DEBUG][i.n.u.ResourceLeakDetectorFactory] [node-3] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@2ac2fed5
[2022-11-30T13:08:54,532][DEBUG][i.n.u.Recycler           ] [node-3] -Dio.netty.recycler.maxCapacityPerThread: disabled
[2022-11-30T13:08:54,532][DEBUG][i.n.u.Recycler           ] [node-3] -Dio.netty.recycler.maxSharedCapacityFactor: disabled
[2022-11-30T13:08:54,532][DEBUG][i.n.u.Recycler           ] [node-3] -Dio.netty.recycler.linkCapacity: disabled
[2022-11-30T13:08:54,532][DEBUG][i.n.u.Recycler           ] [node-3] -Dio.netty.recycler.ratio: disabled
[2022-11-30T13:08:54,532][DEBUG][i.n.u.Recycler           ] [node-3] -Dio.netty.recycler.delayedQueue.ratio: disabled
[2022-11-30T13:08:54,578][DEBUG][o.e.t.TcpTransport       ] [node-3] opened transport connection [2] to [{node-1:9300}{t33aTKxBTbOqvGXREoG6Jw}{node-1}{node-1:9300}] using channels [[Netty4TcpChannel{localAddress=/node-3:58443, remoteAddress=/node-1:9300, profile=default}]]
[2022-11-30T13:08:54,578][DEBUG][o.e.t.TcpTransport       ] [node-3] opened transport connection [1] to [{node-2:9300}{EGv7AxS7SyenjdqdXOrnzw}{node-2}{node-2:9300}] using channels [[Netty4TcpChannel{localAddress=/node-3:58444, remoteAddress=/node-2:9300, profile=default}]]
[2022-11-30T13:08:54,630][DEBUG][o.e.t.TcpTransport       ] [node-3] closed transport connection [2] to [{node-1:9300}{t33aTKxBTbOqvGXREoG6Jw}{node-1}{node-1:9300}] with age [0ms]
[2022-11-30T13:08:54,650][DEBUG][o.e.t.TcpTransport       ] [node-3] closed transport connection [1] to [{node-2:9300}{EGv7AxS7SyenjdqdXOrnzw}{node-2}{node-2:9300}] with age [0ms]
[2022-11-30T13:08:54,687][DEBUG][o.e.t.TcpTransport       ] [node-3] opened transport connection [3] to [{node-1}{EJ4ZGhnYQsK5mI4VEfS1KA}{eCuOAYNOQUGtWdVugw2B_g}{node-1}{node-1:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=4294967296, transform.node=true}] using channels [[Netty4TcpChannel{localAddress=/node-3:58460, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58445, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58450, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58446, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58464, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58447, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58453, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58448, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58466, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58449, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58456, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58451, remoteAddress=node-1/node-1:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58452, remoteAddress=node-1/node-1:9300, profile=default}]]
[2022-11-30T13:08:54,703][DEBUG][o.e.t.TcpTransport       ] [node-3] opened transport connection [4] to [{node-2}{oCJTce5TTqyVIrYII2NGHw}{ICS-uBAeSnmr9fx-Si5GRA}{node-2}{node-2:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=8589934592, transform.node=true}] using channels [[Netty4TcpChannel{localAddress=/node-3:58467, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58457, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58454, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58468, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58455, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58458, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58459, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58469, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58461, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58463, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58462, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58470, remoteAddress=node-2/node-2:9300, profile=default}, Netty4TcpChannel{localAddress=/node-3:58465, remoteAddress=node-2/node-2:9300, profile=default}]]
[2022-11-30T13:08:54,711][DEBUG][o.e.t.ClusterConnectionManager] [node-3] connected to node [{node-2}{oCJTce5TTqyVIrYII2NGHw}{ICS-uBAeSnmr9fx-Si5GRA}{node-2}{node-2:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=8589934592, transform.node=true}]
[2022-11-30T13:08:54,711][DEBUG][o.e.t.ClusterConnectionManager] [node-3] connected to node [{node-1}{EJ4ZGhnYQsK5mI4VEfS1KA}{eCuOAYNOQUGtWdVugw2B_g}{node-1}{node-1:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=4294967296, transform.node=true}]
[2022-11-30T13:08:54,758][DEBUG][o.e.c.c.JoinHelper       ] [node-3] attempting to join {node-1}{EJ4ZGhnYQsK5mI4VEfS1KA}{eCuOAYNOQUGtWdVugw2B_g}{node-1}{node-1:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=4294967296, transform.node=true} with JoinRequest{sourceNode={node-3}{iec-uYA3R7eHPZba26e6Tg}{sYPcltL9Q0eU6_l_3q-jIA}{node-3}{node-3:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=6442450944}, minimumTerm=283, optionalJoin=Optional.empty}
[2022-11-30T13:08:54,764][DEBUG][o.e.t.TcpTransport       ] [node-3] close connection exception caught on transport layer [Netty4TcpChannel{localAddress=/node-3:9300, remoteAddress=/node-1:54804, profile=default}], disconnecting from relevant node
java.net.SocketException: Connection reset
at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
at org.elasticsearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:131) ~[transport-netty4-client-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:116) ~[transport-netty4-client-7.16.2.jar:7.16.2]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]
at java.lang.Thread.run(Thread.java:833) [?:?]
[2022-11-30T13:08:54,780][INFO ][o.e.c.c.JoinHelper       ] [node-3] failed to join {node-1}{EJ4ZGhnYQsK5mI4VEfS1KA}{eCuOAYNOQUGtWdVugw2B_g}{node-1}{node-1:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, ml.max_open_jobs=512, xpack.installed=true, ml.max_jvm_size=4294967296, transform.node=true} with JoinRequest{sourceNode={node-3}{iec-uYA3R7eHPZba26e6Tg}{sYPcltL9Q0eU6_l_3q-jIA}{node-3}{node-3:9300}{cdfhilmrstw}{ml.machine_memory=17179262976, xpack.installed=true, transform.node=true, ml.max_open_jobs=512, ml.max_jvm_size=6442450944}, minimumTerm=283, optionalJoin=Optional.empty}
org.elasticsearch.transport.RemoteTransportException: [node-1][node-1:9300][internal:cluster/coordination/join]
Caused by: org.elasticsearch.transport.ConnectTransportException: [node-3][node-3:9300] general node connection failure
at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.lambda$onResponse$2(TcpTransport.java:1035) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:144) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TransportHandshaker$HandshakeResponseHandler.handleLocalException(TransportHandshaker.java:155) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TransportHandshaker.lambda$sendHandshake$0(TransportHandshaker.java:52) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:241) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$0(ActionListener.java:277) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.core.CompletableContext.lambda$addListener$0(CompletableContext.java:28) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
at org.elasticsearch.core.CompletableContext.complete(CompletableContext.java:50) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:51) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1182) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:773) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:749) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1352) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:622) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:606) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.logging.LoggingHandler.close(LoggingHandler.java:256) ~[netty-handler-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:622) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:606) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957) ~[?:?]
at io.netty.channel.AbstractChannel.close(AbstractChannel.java:244) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4TcpChannel.close(Netty4TcpChannel.java:91) ~[?:?]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:74) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:116) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:99) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannels(CloseableChannel.java:78) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannel(CloseableChannel.java:67) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannel(CloseableChannel.java:57) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:658) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TcpTransport.onException(TcpTransport.java:638) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.lambda$new$0(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.SecurityTransportExceptionHandler.accept(SecurityTransportExceptionHandler.java:45) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.onException(SecurityNetty4Transport.java:123) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:85) ~[transport-netty4-client-7.16.2.jar:7.16.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.logging.LoggingHandler.exceptionCaught(LoggingHandler.java:214) ~[netty-handler-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]
at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: org.elasticsearch.transport.TransportException: handshake failed because connection reset
at org.elasticsearch.transport.TransportHandshaker.lambda$sendHandshake$0(TransportHandshaker.java:52) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:241) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$0(ActionListener.java:277) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.core.CompletableContext.lambda$addListener$0(CompletableContext.java:28) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
at org.elasticsearch.core.CompletableContext.complete(CompletableContext.java:50) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.netty4.Netty4TcpChannel.lambda$addListener$0(Netty4TcpChannel.java:51) ~[?:?]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) ~[netty-common-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1182) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:773) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:749) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1352) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:622) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:606) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.logging.LoggingHandler.close(LoggingHandler.java:256) ~[netty-handler-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:622) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:606) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957) ~[?:?]
at io.netty.channel.AbstractChannel.close(AbstractChannel.java:244) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4TcpChannel.close(Netty4TcpChannel.java:91) ~[?:?]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:74) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:116) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:99) ~[elasticsearch-core-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannels(CloseableChannel.java:78) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannel(CloseableChannel.java:67) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.common.network.CloseableChannel.closeChannel(CloseableChannel.java:57) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TcpTransport.handleException(TcpTransport.java:658) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.transport.TcpTransport.onException(TcpTransport.java:638) ~[elasticsearch-7.16.2.jar:7.16.2]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.lambda$new$0(SecurityNetty4Transport.java:81) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.SecurityTransportExceptionHandler.accept(SecurityTransportExceptionHandler.java:45) ~[?:?]
at org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport.onException(SecurityNetty4Transport.java:123) ~[?:?]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.exceptionCaught(Netty4MessageChannelHandler.java:85) ~[transport-netty4-client-7.16.2.jar:7.16.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.handler.logging.LoggingHandler.exceptionCaught(LoggingHandler.java:214) ~[netty-handler-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177) ~[netty-transport-4.1.66.Final.jar:4.1.66.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[?:?]

Thanks,
Pooja

Welcome to our community! :smiley:

Connection reset could suggest a networking issue external to Elasticsearch. Are all of your nodes on the same network?
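As a first step, it's worth ruling out basic TCP reachability on the transport port from each node to the others. A minimal sketch of such a check (the `can_connect` helper is hypothetical; the hostnames and port 9300 come from the logs above):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection; True if the connection succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run from node-3:
#   can_connect("node-1", 9300)
#   can_connect("node-2", 9300)
```

Note that a successful TCP connect does not rule out a firewall or other middlebox resetting established connections mid-stream, which would be consistent with the "Connection reset" during the join handshake seen here.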

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.