Master Node log:
[2020-08-18T15:55:51,187][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-18T15:55:51,187][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_ia/ubcj3LUMRx6HSV5M19LrwQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-18T15:56:23,663][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_ia/NmVohVHQRku-CRcOzwEkA] deleting index
[2020-08-18T15:56:23,717][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_di1/y1YPLXOfTu-tXQojn3ksxQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-18T15:56:23,717][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-18T15:56:23,717][INFO ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_ia/ubcj3LUMRx6HSV5M19LrwQ]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:16:10,774][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:16:10,842][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:16:55,865][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_di1/Ui_SniqSQoW9Wr9KQatFNw] deleting index
[2020-08-19T07:16:55,935][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:16:55,935][INFO ][o.e.g.LocalAllocateDangledIndices] [2NJ7wN5] auto importing dangled indices [[txp_msgs_di1/y1YPLXOfTu-tXQojn3ksxQ]/OPEN] from [{2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300}]
[2020-08-19T07:16:55,961][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:16:55,977][WARN ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2020-08-19T07:21:55,781][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_st1/Db5atkUwTIWjtGwf5U7kFw] deleting index
[2020-08-19T07:21:55,831][INFO ][o.e.g.DanglingIndicesState] [2NJ7wN5] [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]] dangling index exists on local file system, but not in cluster metadata, auto import to cluster state
[2020-08-19T07:21:55,831][INFO ][o.e.g.LocalAllocateDangledIndices] [2NJ7wN5] auto importing dangled indices [[txp_msgs_st1/zwAYEoBlT7-kKBxpHlqBRA]/OPEN] from [{612lCgJ}{612lCgJtQCqeN7RDd0_Efg}{D566hxCZSGuMejmb5dn3Yw}{10.0.3.5}{10.0.3.5:9300}]
[2020-08-19T07:22:27,969][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_st1/zwAYEoBlT7-kKBxpHlqBRA] deleting index
[2020-08-19T07:22:28,045][INFO ][o.e.g.LocalAllocateDangledIndices] [2NJ7wN5] auto importing dangled indices [[txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w]/OPEN] from [{2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300}]
[2020-08-19T07:52:56,065][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_di1/y1YPLXOfTu-tXQojn3ksxQ] deleting index
[2020-08-19T07:52:56,106][WARN ][o.e.d.c.m.MetaDataCreateIndexService] the default number of shards will change from [5] to [1] in 7.0.0; if you wish to continue using the default of [5] shards, you must manage this on the create index request or with an index template
[2020-08-19T07:52:56,110][INFO ][o.e.c.m.MetaDataCreateIndexService] [2NJ7wN5] [txp_msgs_di1] creating index, cause [api], templates [], shards [5]/[1], mappings [txp_msgs_di1]
[2020-08-19T07:53:35,014][INFO ][o.e.c.m.MetaDataDeleteIndexService] [2NJ7wN5] [txp_msgs_st1/BqVwXMJBT2uChYxUdc3k5w] deleting index
[2020-08-19T07:53:35,076][WARN ][o.e.d.c.m.MetaDataCreateIndexService] the default number of shards will change from [5] to [1] in 7.0.0; if you wish to continue using the default of [5] shards, you must manage this on the create index request or with an index template
[2020-08-19T07:53:35,079][INFO ][o.e.c.m.MetaDataCreateIndexService] [2NJ7wN5] [txp_msgs_st1] creating index, cause [api], templates [], shards [5]/[1], mappings [txp_msgs_st1]
[2020-08-19T07:53:36,032][INFO ][o.e.c.r.a.AllocationService] [2NJ7wN5] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[txp_msgs_st1][3]] ...]).
[2020-08-19T14:31:04,052][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-left({C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{iY4WtQZUSnWnn0pkCeS9IA}{10.0.3.11}{10.0.3.11:9300}), reason(left), reason: removed {{C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{iY4WtQZUSnWnn0pkCeS9IA}{10.0.3.11}{10.0.3.11:9300},}
[2020-08-19T14:31:04,876][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] removed {{C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{iY4WtQZUSnWnn0pkCeS9IA}{10.0.3.11}{10.0.3.11:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [797] source [zen-disco-node-left({C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{iY4WtQZUSnWnn0pkCeS9IA}{10.0.3.11}{10.0.3.11:9300}), reason(left)]])
[2020-08-19T14:31:05,086][INFO ][o.e.c.r.DelayedAllocationService] [2NJ7wN5] scheduling reroute for delayed shards in [58.9s] (303 delayed shards)
[2020-08-19T14:31:08,745][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-left({YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{MiUy7ODuTL2IKmd7kgnRWw}{10.0.3.10}{10.0.3.10:9300}), reason(left), reason: removed {{YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{MiUy7ODuTL2IKmd7kgnRWw}{10.0.3.10}{10.0.3.10:9300},}
[2020-08-19T14:31:08,841][WARN ][o.e.c.NodeConnectionsService] [2NJ7wN5] failed to connect to node {YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{MiUy7ODuTL2IKmd7kgnRWw}{10.0.3.10}{10.0.3.10:9300} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [YsOw878][10.0.3.10:9300] connect_exception
    at org.elasticsearch.transport.TcpChannel.awaitConnected(TcpChannel.java:165) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:643) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:542) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:329) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:316) ~[elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.cluster.NodeConnectionsService.validateAndConnectIfNeeded(NodeConnectionsService.java:153) [elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.cluster.NodeConnectionsService$ConnectionChecker.doRun(NodeConnectionsService.java:180) [elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.4.0.jar:6.4.0]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.4.0.jar:6.4.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
    at java.lang.Thread.run(Thread.java:844) [?:?]
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: 10.0.3.10/10.0.3.10:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
    ... 1 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323) ~[?:?]
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
    ... 1 more
[2020-08-19T14:31:09,606][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] removed {{YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{MiUy7ODuTL2IKmd7kgnRWw}{10.0.3.10}{10.0.3.10:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [798] source [zen-disco-node-left({YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{MiUy7ODuTL2IKmd7kgnRWw}{10.0.3.10}{10.0.3.10:9300}), reason(left)]])
[2020-08-19T14:31:13,554][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-left({o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{gCbtQv72TtS0f0NRh3Z00Q}{10.0.3.12}{10.0.3.12:9300}), reason(left), reason: removed {{o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{gCbtQv72TtS0f0NRh3Z00Q}{10.0.3.12}{10.0.3.12:9300},}
[2020-08-19T14:31:13,889][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] removed {{o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{gCbtQv72TtS0f0NRh3Z00Q}{10.0.3.12}{10.0.3.12:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [800] source [zen-disco-node-left({o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{gCbtQv72TtS0f0NRh3Z00Q}{10.0.3.12}{10.0.3.12:9300}), reason(left)]])
[2020-08-19T14:31:14,183][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-join, reason: added {{C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{HHiKUGZ8TZWgySAC-R7IdA}{10.0.3.13}{10.0.3.13:9300},}
[2020-08-19T14:31:14,873][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] added {{C_K1qtY}{C_K1qtYzTB6IrZsQQOagNA}{HHiKUGZ8TZWgySAC-R7IdA}{10.0.3.13}{10.0.3.13:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [801] source [zen-disco-node-join]])
[2020-08-19T14:31:20,896][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-join, reason: added {{YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{8pnQxkKxQhq86X0hZbcbtw}{10.0.3.14}{10.0.3.14:9300},}
[2020-08-19T14:31:21,802][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] added {{YsOw878}{YsOw878NQTuPHI2yqMQ7Lg}{8pnQxkKxQhq86X0hZbcbtw}{10.0.3.14}{10.0.3.14:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [881] source [zen-disco-node-join]])
[2020-08-19T14:31:22,821][INFO ][o.e.c.r.a.AllocationService] [2NJ7wN5] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[txp_msgs_mo][1]] ...]).
[2020-08-19T14:31:24,131][INFO ][o.e.c.s.MasterService ] [2NJ7wN5] zen-disco-node-join, reason: added {{o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{c0KukbiYQSi_LxIvNBGgfw}{10.0.3.15}{10.0.3.15:9300},}
[2020-08-19T14:31:24,572][INFO ][o.e.c.s.ClusterApplierService] [2NJ7wN5] added {{o-NzQWH}{o-NzQWHLSvCaeaK1--F6g}{c0KukbiYQSi_LxIvNBGgfw}{10.0.3.15}{10.0.3.15:9300},}, reason: apply cluster state (from master [master {2NJ7wN5}{2NJ7wN5zRGmUGdU0H0O6nQ}{ejfNSUaGQ3eOSXi_iST_3Q}{10.0.3.8}{10.0.3.8:9300} committed version [914] source [zen-disco-node-join]])
[2020-08-19T14:31:45,512][INFO ][o.e.c.r.a.AllocationService] [2NJ7wN5] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.elastichq][0]] ...]).
Can you please let me know how to correct these dangling index warnings?
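
For context on what I am planning: my reading of the log is that a stale on-disk copy of txp_msgs_st1 (UUID BqVwXMJBT2uChYxUdc3k5w) survived on a node that missed the index deletion, so the master keeps trying to re-import it as a dangling index. Since this cluster is on 6.4.0 (the _dangling REST API only exists from 7.9), the cleanup I am considering is the sketch below; the data path and node directory are assumptions for a package install, not verified values from my nodes:

# 1. Confirm the UUID of the live index in cluster metadata
curl -s 'http://localhost:9200/_cat/indices/txp_msgs_st1?v&h=index,uuid,health'

# 2. On each data node, list the index directories under path.data
#    (the path below is an assumption; substitute the node's actual path.data)
ls /var/lib/elasticsearch/nodes/0/indices/

# 3. If a directory's UUID (e.g. BqVwXMJBT2uChYxUdc3k5w) does not match
#    the live UUID from step 1, stop Elasticsearch on that node, remove
#    the stale copy, and restart
sudo systemctl stop elasticsearch
sudo rm -rf /var/lib/elasticsearch/nodes/0/indices/BqVwXMJBT2uChYxUdc3k5w
sudo systemctl start elasticsearch

Is deleting the stale copy from disk safe here, or is there a supported way to stop these auto-import warnings?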