Getting 'Kibana server is not ready yet' after upgrading the ELK stack

I ran an 'apt-get upgrade' on the Ubuntu server hosting the ELK stack. Since then my Kibana page no longer loads and keeps showing the 'Kibana server is not ready yet' message.

The kibana, elasticsearch and logstash services are all active and green, and there are no errors in kibana.stderr. I can also curl http://localhost:9200 and get a proper response, as below:

     "name" : "xrsm0UW",
     "cluster_name" : "elasticsearch",
     "cluster_uuid" : "HW0C3VYoQoqzywhskvKuPQ",
     "version" : {
       "number" : "6.8.0",
       "build_flavor" : "default",
       "build_type" : "deb",
       "build_hash" : "65b6179",
       "build_date" : "2019-05-15T20:06:13.172855Z",
       "build_snapshot" : false,
       "lucene_version" : "7.7.0",
       "minimum_wire_compatibility_version" : "5.6.0",
       "minimum_index_compatibility_version" : "5.0.0"
     },
     "tagline" : "You Know, for Search"
   }
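
For reference, which package versions the upgrade actually pulled in can be double-checked with something like this (assuming the standard .deb packages):

    # List the installed ELK package versions
    dpkg -l | grep -E 'elasticsearch|kibana|logstash'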

In elasticsearch.log there are these warnings:

    [2019-05-30T23:55:47,920][WARN ][o.e.m.j.JvmGcMonitorService] [xrsm0UW] [gc][37170] overhead, spent [1.8s] collecting in the last [1.9s]
    [2019-05-30T23:55:50,844][WARN ][o.e.m.j.JvmGcMonitorService] [xrsm0UW] [gc][37171] overhead, spent [2.8s] collecting in the last [2.9s]
    [2019-05-30T23:56:00,058][WARN ][o.e.m.j.JvmGcMonitorService] [xrsm0UW] [gc][37179] overhead, spent [2s] collecting in the last [2.2s]
    [2019-05-30T23:56:01,991][WARN ][o.e.m.j.JvmGcMonitorService] [xrsm0UW] [gc][37180] overhead, spent [1.8s] collecting in the last [1.9s]
    [2019-05-30T23:56:06,990][WARN ][o.e.m.j.JvmGcMonitorService] [xrsm0UW] [gc][37184] overhead, spent [1.8s] collecting in the last [1.9s]
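
Those GC overhead warnings usually mean the JVM heap is under pressure. As a side note, the configured heap can be inspected like this (path assumes the .deb install) and raised if the box has memory to spare:

    # Show the configured heap settings (keep -Xms and -Xmx equal)
    grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options
    # After raising both values, e.g. to -Xms2g / -Xmx2g, restart:
    sudo systemctl restart elasticsearch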

In kibana.stdout I have the following:

{"type":"log","@timestamp":"2019-05-28T04:11:21Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":3973,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-28T04:11:21Z","tags":["status","plugin:timelion@6.2.3","info"],"pid":3973,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-28T04:11:21Z","tags":["status","plugin:console@6.2.3","info"],"pid":3973,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-28T04:11:21Z","tags":["status","plugin:metrics@6.2.3","info"],"pid":3973,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-05-28T04:11:21Z","tags":["fatal"],"pid":3973,"message":"Port 5601 is already in use. Another instance of Kibana may be running!"}

It's interesting that the above output shows plugin:elasticsearch@6.2.3 instead of the upgraded version, 6.8.0. Could this be my problem?

The fatal 'Port 5601 is already in use' message seems more relevant.

Is this an indication that the old version of Kibana is somehow still running and conflicting with the newly installed version?
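
One quick way to rule that out is to ask the installed binary directly; the --version flag is standard for the packaged Kibana:

    # Print the version of the Kibana binary on disk
    /usr/share/kibana/bin/kibana --version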

When checking the plugins of the newer Kibana install, I can't see anything under /usr/share/kibana/plugins.
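
For what it's worth, an empty plugins folder should be normal if no third-party plugins were ever installed; if I remember correctly, the 6.x default distribution ships X-Pack outside that folder. User-installed plugins can be listed with the bundled tool:

    # List user-installed Kibana plugins (empty output is fine)
    /usr/share/kibana/bin/kibana-plugin list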

What does lsof -i :5601 return?

I got this:

    COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
    node    1541 kibana   18u  IPv4  17332      0t0  TCP elk01.contoso.int:5601 (LISTEN)

OK, so you may want to check what that PID is running. Is it an older version?
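
For example:

    # Show the full command line behind the PID that lsof reported
    ps -fp 1541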

Looks like it's running from the newer version:

    UID        PID  PPID  C STIME TTY          TIME CMD
    kibana    1541     1  0 May30 ?        00:05:30 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

OK, try stopping and restarting the process.
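
For example, restart through systemd and follow the log while it comes back up:

    sudo systemctl restart kibana
    # Watch the startup log for any new fatal message
    sudo journalctl -u kibana -f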

I restarted the service and am getting the same result. I also had another look at the kibana.stdout log; it appears to be from a date before the upgrade, so I don't think it's related to my current problem. Here is the service status after the restart:

    ● kibana.service - Kibana
       Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
       Active: active (running) since Fri 2019-05-31 01:00:53 UTC; 3min 49s ago
     Main PID: 7846 (node)
        Tasks: 11
       Memory: 255.1M
          CPU: 12.518s
       CGroup: /system.slice/kibana.service
               └─7846 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:beats_management@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","p
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:index_management@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","p
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:index_lifecycle_management@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green -
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:rollup@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","prevState":
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:remote_clusters@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","pr
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:cross_cluster_replication@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green -
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:reporting@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","prevStat
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:security@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","prevState
    May 31 01:04:27 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:27Z","tags":["status","plugin:maps@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","prevState":"r
    May 31 01:04:31 ip-10-1-1-10 kibana[7846]: {"type":"log","@timestamp":"2019-05-31T01:04:31Z","tags":["status","plugin:elasticsearch@6.8.0","info"],"pid":7846,"state":"green","message":"Status changed from red to green - Ready","prev

I suspect this is the culprit:

    The task maps_telemetry "Maps-maps_telemetry" is not cancellable.
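
One workaround sometimes mentioned for stuck tasks after a 6.x upgrade is deleting the task manager index so Kibana recreates it on the next restart. I'm assuming the index is named .kibana_task_manager here, and note that deleting it discards scheduled-task state:

    # Assumption: .kibana_task_manager is the task manager index in 6.x
    curl -XDELETE 'http://localhost:9200/.kibana_task_manager'
    sudo systemctl restart kibana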

Tried removing the old Kibana index from Elasticsearch and restarting... Still no luck.
