--------------------------------------------------- 数据节点日志 -------------------------------------------------------
[2023-06-26T10:15:28,625][WARN ][r.suppressed ] [datanode-17] path: /_bulk, params: {timeout=1m}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:177) ~[elasticsearch-8.7.1.jar:?]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:668) ~[elasticsearch-8.7.1.jar:?]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:541) ~[elasticsearch-8.7.1.jar:?]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.7.1.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:577) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:891) ~[elasticsearch-8.7.1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
at java.lang.Thread.run(Thread.java:1623) ~[?:?]
[2023-06-26T10:17:33,822][INFO ][o.e.c.c.Coordinator ] [datanode-17] [3] consecutive checks of the master node [{masternode-3}{07_8X2suTImYUntM9TcQDA}{-X019KPMQsiKqYu4rCRSMQ}{masternode-3}{10.0.3.131}{10.0.3.131:9300}{m}{8.7.1}] were unsuccessful ([3] rejected, [0] timed out), restarting discovery; more details may be available in the master node logs [last unsuccessful check: rejecting check since [{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1}] has been removed from the cluster]
[2023-06-26T10:17:43,828][WARN ][o.e.c.c.ClusterFormationFailureHelper] [datanode-17] master not discovered yet: have discovered [{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1}, {masternode-1}{xTfyMewmQgWSlrDmWAvb-w}{8KyR19XyS-aIWc754UvviA}{masternode-1}{10.0.3.129}{10.0.3.129:9300}{m}{8.7.1}, {masternode-2}{Xxc8kOMTRvyghyv0a-7woA}{h-M2ZrZgSpu01ROd7_ruZQ}{masternode-2}{10.0.3.130}{10.0.3.130:9300}{m}{8.7.1}, {masternode-3}{07_8X2suTImYUntM9TcQDA}{-X019KPMQsiKqYu4rCRSMQ}{masternode-3}{10.0.3.131}{10.0.3.131:9300}{m}{8.7.1}]; discovery will continue using [10.0.3.129:9300, 10.0.3.130:9300, 10.0.3.131:9300] from hosts providers and [{masternode-1}{xTfyMewmQgWSlrDmWAvb-w}{8KyR19XyS-aIWc754UvviA}{masternode-1}{10.0.3.129}{10.0.3.129:9300}{m}{8.7.1}, {masternode-2}{Xxc8kOMTRvyghyv0a-7woA}{h-M2ZrZgSpu01ROd7_ruZQ}{masternode-2}{10.0.3.130}{10.0.3.130:9300}{m}{8.7.1}, {masternode-3}{07_8X2suTImYUntM9TcQDA}{-X019KPMQsiKqYu4rCRSMQ}{masternode-3}{10.0.3.131}{10.0.3.131:9300}{m}{8.7.1}] from last-known cluster state; node term 11, last-accepted version 78576 in term 11; joining [{masternode-3}{07_8X2suTImYUntM9TcQDA}{-X019KPMQsiKqYu4rCRSMQ}{masternode-3}{10.0.3.131}{10.0.3.131:9300}{m}{8.7.1}] in term [11] has status [waiting for local cluster applier] after [10s/10005ms]; for troubleshooting guidance, see Troubleshooting discovery | Elasticsearch Guide [8.7] | Elastic
----------------------------------------------------- Master node logs -----------------------------------------------------
[2023-06-26T10:17:27,820][INFO ][o.e.c.s.MasterService ] [masternode-3] node-left[{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1} reason: lagging], term: 11, version: 78577, delta: removed {{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1}}
[2023-06-26T10:17:27,868][INFO ][o.e.c.s.ClusterApplierService] [masternode-3] removed {{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1}}, term: 11, version: 78577, reason: Publication{term=11, version=78577}
[2023-06-26T10:17:27,870][INFO ][o.e.c.c.NodeLeftExecutor ] [masternode-3] node-left: [{datanode-17}{S2hINripQFSzXpsg7OQlkw}{OuVvZfdzQb-qUfZ9jZnbwA}{datanode-17}{10.0.3.123}{10.0.3.123:9301}{dr}{8.7.1}] with reason [lagging]
Welcome to our community!
Please don't just post a log excerpt with no other context. It's impossible to assist you with the above as it tells us nothing about what you want to do.
The 中文提问与讨论 category on Discuss the Elastic Stack may also be a better fit.
Sorry, because of a network problem the question was submitted without any explanation.
Let me add the details here:
The ES version is 8.7.1. Data nodes frequently drop out of the cluster because of lagging, but while they are out of the cluster the network is healthy and communication between all nodes works fine. I don't know what is causing this behaviour and would appreciate your analysis.
Thanks in advance!!
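In case it helps with the analysis: the master removes a node with reason [lagging] when that node fails to apply a published cluster state within cluster.follower_lag.timeout (90 seconds by default), and the [waiting for local cluster applier] status in the data node log suggests the cluster applier thread on datanode-17 is busy, rather than the network being at fault. Below is a minimal diagnostic sketch in Python using the requests library; it assumes the cluster is reachable at http://localhost:9200 without authentication, so adjust the endpoint, credentials, and node name for your environment.

import requests

ES = "http://localhost:9200"   # assumption: adjust host/port/auth for your cluster
NODE = "datanode-17"           # node name taken from the logs above

# 1. Show the lag timeout the master applies before evicting a "lagging" node.
resp = requests.get(f"{ES}/_cluster/settings",
                    params={"include_defaults": "true", "flat_settings": "true"})
settings = resp.json()
for section in ("persistent", "transient", "defaults"):
    for key, value in settings.get(section, {}).items():
        if "follower_lag" in key:
            print(f"{section}: {key} = {value}")

# 2. Hot threads on the affected node; a busy cluster applier thread here would
#    explain the "waiting for local cluster applier" status seen above.
resp = requests.get(f"{ES}/_nodes/{NODE}/hot_threads")
print(resp.text)

# 3. Cluster state updates still queued on the master.
resp = requests.get(f"{ES}/_cluster/pending_tasks")
print(resp.json())

If the hot threads output shows the cluster applier stuck on something like mapping updates or slow disk I/O, fixing that is usually more effective than raising cluster.follower_lag.timeout, which only delays the eviction.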