Kibana server is not ready yet - Fresh install on CentOS 8

Hi Folks,

I also receive this error: "Kibana server is not ready yet"

Tutorial I'm following:

Java 11
NGINX as reverse proxy (login through it is working fine)
Elasticsearch is running
Logstash is running

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2748/sshd           
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      8886/node           
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2770/nginx: master  
tcp6       0      0 :::22                   :::*                    LISTEN      2748/sshd           
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      2710/java           
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      8759/java           
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      8759/java           
tcp6       0      0 :::5044                 :::*                    LISTEN      2710/java           
udp        0      0 192.168.12.86:68        0.0.0.0:*                           2719/NetworkManager 
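
For reference, the listing above came from something like the following command (the exact flags are an assumption on my part, not copied from my shell history):

sudo netstat -tulpn
# or, with iproute2 on CentOS 8:
sudo ss -tulpn

The relevant entry is Kibana (node) listening only on 127.0.0.1:5601, which is expected since NGINX sits in front of it.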

elasticsearch.yml

cluster.name: ELK
node.name: svgXXX-elk-01
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 127.0.0.1
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]

For your information: I only have one server and will not set up a multi-node cluster, since it's not necessary.

kibana.yml

server.host: "127.0.0.1"
server.name: "SVGWMA-ELK-01"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
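
To double-check that the URL Kibana points at is reachable, the standard cluster health API can be queried directly (a generic check, not output pasted from my server):

curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'

If no master has been elected, this call typically returns a master_not_discovered_exception (HTTP 503) after the 30s timeout instead of a green/yellow status, which is consistent with the exceptions in the log below.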

Log from /var/log/elasticsearch/elk.log:

[2019-12-08T18:46:32,816][WARN ][o.e.c.c.ClusterFormationFailureHelper] [svgtest-elk-01] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1] to bootstrap a cluster: have discovered [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [[::1]:9300] from hosts providers and [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-12-08T18:46:40,895][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [svgtest-elk-01] timed out while retrying [indices:admin/get] after failure (timeout [30s])
[2019-12-08T18:46:40,896][WARN ][r.suppressed ] [svgtest-elk-01] path: /.kibana, params: {index=.kibana}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:218) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:598) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) [elasticsearch-7.5.0.jar:7.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
[2019-12-08T18:46:40,899][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [svgtest-elk-01] timed out while retrying [indices:admin/get] after failure (timeout [30s])
[2019-12-08T18:46:40,900][WARN ][r.suppressed ] [svgtest-elk-01] path: /.kibana_task_manager, params: {index=.kibana_task_manager}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:218) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:598) [elasticsearch-7.5.0.jar:7.5.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703) [elasticsearch-7.5.0.jar:7.5.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:830) [?:?]
[2019-12-08T18:46:42,819][WARN ][o.e.c.c.ClusterFormationFailureHelper] [svgtest-elk-01] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1] to bootstrap a cluster: have discovered [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [[::1]:9300] from hosts providers and [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-12-08T18:46:43,413][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [svgtest-elk-01] no known master node, scheduling a retry
[2019-12-08T18:46:43,419][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [svgtest-elk-01] no known master node, scheduling a retry
[2019-12-08T18:46:52,822][WARN ][o.e.c.c.ClusterFormationFailureHelper] [svgtest-elk-01] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1] to bootstrap a cluster: have discovered [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [[::1]:9300] from hosts providers and [{svgtest-elk-01}{dmsezsMiS8SxcL5vhh7Cqg}{QmCLCdeZQLKa0kVtJQxz4g}{127.0.0.1}{127.0.0.1:9300}{dilm}{ml.machine_memory=16648409088, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

Thank you in advance for every reply and any help.

If this is a single-node cluster, then node.name and cluster.initial_master_nodes should be the same. The svgxxx.... node is waiting to discover the specified initial master node "node-1" and cannot find it.
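
As a minimal sketch (node name copied from the elasticsearch.yml above; discovery.type is the standard single-node alternative, not something taken from your config), the relevant lines would look like:

cluster.name: ELK
node.name: svgXXX-elk-01
network.host: 127.0.0.1
http.port: 9200
# Option 1: bootstrap the cluster with this node as the initial master
cluster.initial_master_nodes: ["svgXXX-elk-01"]
# Option 2: instead of the line above, declare a single-node deployment
# discovery.type: single-node

Note that discovery.type: single-node cannot be combined with cluster.initial_master_nodes (Elasticsearch will refuse to start), and the discovery.seed_hosts line becomes unnecessary as well. Pick one of the two options and restart Elasticsearch afterwards.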


Thank you, yes, this was correct and it was the problem.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.