Kibana server is not ready yet issue after upgrade to 6.5.0


(Nuwan Vithanage) #1

Since last night I have been experiencing the error message "Kibana server is not ready yet" in the browser when I try to open Kibana. I upgraded Kibana and Elasticsearch to v6.5.0.
I've done a lot of research, but I can't find the cause.

[root@915452-IngestionDemo ~]# curl http://172.24.36.216:9200
{
  "name" : "915452-IngestionDemo.newfrontierdata.com",
  "cluster_name" : "IngestionDemO",
  "cluster_uuid" : "9S5ICK_ITiOMhpdz6sGEgQ",
  "version" : {
    "number" : "6.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "816e6f6",
    "build_date" : "2018-11-09T18:58:36.352602Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

{"type":"log","@timestamp":"2018-11-16T09:54:34Z","tags":["warning","stats-collection"],"pid":9031,"message":"Unable to fetch data from canvas collector"}
{"type":"error","@timestamp":"2018-11-16T09:54:34Z","tags":["warning","stats-collection"],"pid":9031,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{\"size\":0,\"query\":{\"terms\":{\"type\":[\"dashboard\",\"visualization\",\"search\",\"index-pattern\",\"graph-workspace\",\"timelion-sheet\"]}},\"aggs\":{\"types\":{\"terms\":{\"field\":\"type\",\"size\":6}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":,\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":},\"status\":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickCallback (internal/process/next_tick.js:180:9)"},"message":"[search_phase_execution_exception] all shards failed"}
{"type":"log","@timestamp":"2018-11-16T09:54:34Z","tags":["warning","stats-collection"],"pid":9031,"message":"Unable to fetch data from kibana collector"}


(Tek Chand) #2

@Nuwan, can you please check the Kibana server configuration in the kibana.yml file?

Have you defined the Elasticsearch node IP in elasticsearch.url in the kibana.yml file?

Another possible reason: have you upgraded all of your Elasticsearch nodes to version 6.5.0, as well as Kibana? If any of them is still on an older version there might be a compatibility issue, so please check that as well.
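For example, something along these lines (the IP is simply the one from the curl output in the first post, and /etc/kibana/kibana.yml is the default path for an RPM install; adjust both to your environment):

# /etc/kibana/kibana.yml
elasticsearch.url: "http://172.24.36.216:9200"

# run against every Elasticsearch node; "number" should read 6.5.0 on all of them
curl http://172.24.36.216:9200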

Thanks.


(Nuwan Vithanage) #3

Hi

Thank you very much. Your guidance and the following link helped me solve the issue:
https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0.


(Guillain) #4

Thanks,

FYI, with this kind of log it was, in my case, an index issue on .kibana.
A simple way to fix it is to change Kibana's index in the kibana.yml file (and switch back to the original value once you have regained access to the GUI).
E.g.:
kibana.index: ".newkibana"


Resolved: Old kibana index not working after upgrade to 6.5 (fix was to delete .kibana_2 index)
(Nuwan Vithanage) #5

Thank you. The issue was fixed after removing the Kibana-related index and restarting the Kibana service.
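For anyone landing here later, that fix boils down to something like the following (the .kibana_2 index name comes from the linked topic above, the host/port from the first post, and the systemd service name assumes an RPM install; deleting a Kibana index removes the saved objects stored in it, so take a snapshot first if they matter):

curl -XDELETE 'http://172.24.36.216:9200/.kibana_2'
sudo systemctl restart kibana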


(Abdul Jaleel) #6

Check the Elasticsearch version; if it is not the same version as Kibana, please update it.
I had the same issue. After upgrading Elasticsearch to 6.5 and restarting the services, the issue was sorted out.
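A quick way to confirm both sides report the same version (host/port are examples from this thread; the Kibana status API may require authentication if security is enabled):

curl http://172.24.36.216:9200          # Elasticsearch: "number" should be 6.5.0
curl http://localhost:5601/api/status   # Kibana: "version" should also be 6.5.0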

Good Day..


(Nuwan Vithanage) #7

Thank you very much. This issue is fixed now.