Kibana server is not ready yet

Hi Team,

My ELK cluster is running on a single node. While clearing the queue, I restarted the Kibana and ELK services.

Since then my URL has been stuck with the error below:
Kibana server is not ready yet

To fix this I restarted the server and the ELK services, but my home page still shows the same error. Kindly help me fix this.

Check whether ES is up and running:
systemctl status elasticsearch
journalctl -u elasticsearch.service --since "10min ago"

Check the log: /var/log/elasticsearch/<clustername>.log
curl http://localhost:9200
or curl -k -u elastic:<pass> https://localhost:9200
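
If Elasticsearch responds, it can also help to check the cluster health and the state of Kibana's system indices (a quick sketch, assuming the default localhost:9200 endpoint; add -k -u elastic:<pass> if security is enabled):

# Overall cluster health; a red status usually explains a stuck Kibana
curl 'http://localhost:9200/_cluster/health?pretty'
# Health and doc counts of the Kibana system indices
curl 'http://localhost:9200/_cat/indices/.kibana*?v'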

Check Kibana status:
systemctl status kibana
journalctl -u kibana.service --since "10min ago"
There should be a trace in the Kibana log:
/var/log/kibana/kibana.log
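
Kibana's own status API can also hint at what it is waiting for (assuming the default localhost:5601 port; while Kibana is still initializing it may only return the same "not ready" message, but once it is partially up it lists the status of each core service and plugin):

curl 'http://localhost:5601/api/status'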

Hi @Rios / Team, thanks for your prompt reply.

I checked the service status for Kibana, Elasticsearch, and Logstash; they are all running fine.

Please see the service status below.

**kibana service status**

sudo service kibana status
● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-07-23 22:25:55 IST; 13min ago
   Main PID: 5447 (node)
      Tasks: 11 (limit: 19155)
     Memory: 156.4M
     CGroup: /system.slice/kibana.service
             └─5447 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist

Jul 23 22:39:22 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T17:09:22Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:39:24 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T17:09:24Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>

journalctl -u kibana.service --since "10min ago"
-- Logs begin at Wed 2023-07-05 12:15:27 IST, end at Sun 2023-07-23 22:37:51 IST. --
Jul 23 22:27:55 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:57:55Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:27:57 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:57:57Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:28:00 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:58:00Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:28:02 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:58:02Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:28:05 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:58:05Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>
Jul 23 22:28:07 elk kibana[5447]: {"type":"log","@timestamp":"2023-07-23T16:58:07Z","tags":["error","elasticsearch","data"],"pid":5447,"message":"[search_phase_e>

**elasticsearch service status as below**

sudo systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-07-23 22:24:54 IST; 16min ago
       Docs: https://www.elastic.co
   Main PID: 5058 (java)
      Tasks: 132 (limit: 19155)
     Memory: 8.4G
     CGroup: /system.slice/elasticsearch.service
             ├─5058 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreT>
             └─5257 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Jul 23 22:24:41 elk systemd[1]: Starting Elasticsearch...
Jul 23 22:24:54 elk systemd[1]: Started Elasticsearch.

Elasticsearch logs.

[2020-12-16T07:58:27,033][INFO ][o.e.c.r.a.AllocationService] [elk] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[metricbeat-7.10.0-2020.12.16-000001][0]]]).
[2020-12-16T08:01:11,546][INFO ][o.e.c.m.MetadataMappingService] [elk] [crm-server-ip-10-Q4wVr9QsyRIg3iyYaH7Q] update_mapping [_doc]
[2020-12-16T08:02:58,091][INFO ][o.e.n.Node ] [elk] stopping ...
[2020-12-16T08:02:58,121][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [elk] [controller/293902] [Main.cc@154] ML controller exiting
[2020-12-16T08:02:58,121][INFO ][o.e.x.m.p.NativeController] [elk] Native controller process has stopped - no new native processes can be started
[2020-12-16T08:02:58,123][INFO ][o.e.x.w.WatcherService ] [elk] stopping watch service, reason [shutdown initiated]
[2020-12-16T08:02:58,125][INFO ][o.e.x.w.WatcherLifeCycleService] [elk] watcher has stopped and shutdown
[2020-12-16T08:02:59,343][INFO ][o.e.n.Node ] [elk] stopped
[2020-12-16T08:02:59,343][INFO ][o.e.n.Node ] [elk] closing ...
[2020-12-16T08:02:59,363][INFO ][o.e.n.Node ] [elk] closed

Since "queue is full" error i got. i was unable to login kibana console. so i have tried some of the indices.

Once indices deleted i have restareted ELK services. since i restarted i'm getting "kibana getting ready error "

Kindly let me know further details to debug this.

also I have see kibana logs not generating with current timestamp.


 ls -lrth
total 0
-rw-r----- 1 kibana kibana 0 Oct 25  2022 kibana.log

"Kibana getting ready error" happens when kibana is experiencing difficulties on start up or initialization.

Do you have firewall between elasticsearch and kibana? But i guess not, there is just one node you have.

Kibana uses an index in Elasticsearch to store its settings and metadata. If this index gets corrupted or is deleted, Kibana may fail to initialize correctly.
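
A quick way to see what state those system indices are in (assuming the default localhost:9200 endpoint):

# The .kibana alias should point at the newest .kibana_N index
curl 'http://localhost:9200/_cat/aliases/.kibana*?v'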

Please check user permissions, whether any third-party plugins are corrupted, and the kibana.yml configuration.

I assume you are running the same version of Kibana and Elasticsearch.

Just to add to Podarcis' point: check whether you have enough disk space.
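
For example, a couple of quick checks (the allocation check assumes the default localhost:9200 endpoint):

# Free space on the filesystem
df -h
# Disk usage per node as Elasticsearch sees it
curl 'http://localhost:9200/_cat/allocation?v'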

Hi @Rios @PodarcisMuralis, thank you for your response.

Yes, there was a space problem earlier. So I removed some indices and attempted to restart the Kibana service.

My ELK console is currently stuck with the error "Kibana server is not ready yet".
My ELK node has enough space now.

I also verified the permissions of the kibana.yml and elasticsearch.yml files; see below.

ls -lrth
total 24K
-rw-rw---- 1 root kibana 4.8K Feb 24 09:15 kibana.yml.dpkg-dist
-rw------- 1 root kibana  691 Feb 24 09:15 nohup.out
-rw-r--r-- 1 root kibana  216 Feb 24 09:15 node.options
-rw-rw---- 1 root kibana 5.3K Feb 24 14:15 kibana.yml
root@elk:/etc/kibana# cd /etc/elasticsearch/
root@elk:/etc/elasticsearch# ls -lrth
total 48K
drwxr-s--- 2 elasticsearch elasticsearch 4.0K Oct 16  2020 jvm.options.d
-rw-rw---- 1 elasticsearch elasticsearch 2.4K Feb 24 09:15 jvm.options
-rw-rw---- 1 elasticsearch elasticsearch    0 Feb 24 09:15 users
-rw-rw---- 1 elasticsearch elasticsearch  473 Feb 24 09:15 role_mapping.yml
-rw-rw---- 1 elasticsearch elasticsearch  18K Feb 24 09:15 log4j2.properties
-rw-rw---- 1 elasticsearch elasticsearch    0 Feb 24 09:15 users_roles
-rw-rw---- 1 elasticsearch elasticsearch  197 Feb 24 09:15 roles.yml
-rw-rw---- 1 elasticsearch elasticsearch 3.0K Feb 24 09:15 elasticsearch.yml.save
-rw-rw---- 1 elasticsearch elasticsearch  199 Feb 24 09:15 elasticsearch.keystore
-rw-rw---- 1 elasticsearch elasticsearch 3.3K Jul 23 22:24 elasticsearch.yml

Kindly verify the kibana.yml file permissions above and suggest how to restore Kibana to its previous state.

You must have a minimum of 15% free disk space.
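
This relates to the Elasticsearch disk watermarks: once the flood-stage watermark is reached, indices are marked read-only. Recent 7.x versions release the block automatically after space is freed, but it can also be cleared manually; a sketch, assuming the default localhost:9200 endpoint and no security:

# Remove the read-only block that a full disk may have set on the indices
curl -X PUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'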

  1. Can you access ES? Try:
     curl http://localhost:9200
     or curl -k -u elastic:<pass> https://localhost:9200
  2. If ES is not available, check the log: /var/log/elasticsearch/<clustername>.log. What you have provided is not enough; there is no error in it. Change rootLogger.level = info to debug in log4j2.properties (see the snippet after this list).
  3. You haven't shown any info from the journal. Restart the services and check again.
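
For reference, the line to change in /etc/elasticsearch/log4j2.properties (the path shown in your directory listing above) is:

# raise Elasticsearch log verbosity from info to debug
rootLogger.level = debug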

There should be a trace in the Kibana log.

If you don't have logging configured, add this to kibana.yml:

logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging.appenders.default:
  type: file
  fileName: /var/log/kibana/kibana.log
  layout:
    type: pattern
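
If your Kibana version does not accept the logging.appenders syntax above, the older 7.x-style settings should also produce a log file (make sure the kibana user can write to it):

# legacy Kibana 7.x logging settings (alternative to the block above)
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true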

Hi @Rios @PodarcisMuralis

I have 25% free space as of now.

I can access the Elasticsearch URL with the above curl command.

I found the Elasticsearch logs below, but I am unable to get Kibana logs even after trying the suggested configuration.

Elasticsearch logs

[2023-07-24T21:54:37,629][INFO ][o.e.x.s.a.AuthenticationService] [node-1] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
[2023-07-24T21:54:38,538][WARN ][r.suppressed             ] [node-1] path: /.kibana_task_manager/_count, params: {index=.kibana_task_manager}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:568) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:324) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:603) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:400) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:236) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:303) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
	at java.lang.Thread.run(Thread.java:832) [?:?]

@Rios @PodarcisMuralis, thanks for your support.

Kibana is now back to normal after deleting the .kibana indices with the command below:

curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
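
(For reference: deleting the .kibana* indices also removes saved objects such as dashboards, visualizations, and index patterns; Kibana recreates the indices on its next start, so exporting saved objects or taking a snapshot first is safer when that data matters.)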

