Upgraded from 6.4.2 to 6.5.0. Says "Kibana server is not ready yet"


(Jacques Clement) #1

Hi, I did my rolling upgrade this morning: Logstash, then Elasticsearch, then Kibana. Everything went well and the cluster (3 nodes) status is green.

Now when starting Kibana, it says in the browser:
Kibana server is not ready yet

It's been doing that for half an hour now.

Looking at the logs I see:

nov. 16 06:57:24 uatelastic_client1 kibana[2077]: {"type":"log","@timestamp":"2018-11-16T06:57:24Z","tags":["warning","migrations"],"pid":2077,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}

I did exactly what the message suggests, but it did not help; the same problem remains.

I tried a second time and then had the same issue with index .kibana_1.
Any thoughts?


(Tyler Smalley) #2

According to the message, you will need to remove .kibana_2 as well.


(Martin Neiiendam) #3

I'm facing the exact same issue. Have tried deleting .kibana_2 and restarting. No luck.
Next time it said to delete .kibana_1.

I have shut down all Kibana instances, and I'm just working on this one.

Help?


(Jacques Clement) #4

You remove .kibana_2 and then it asks you to remove .kibana_1, and so on. I found the answer to my problem; it is a known issue. I will paste the link to the workaround a bit later.


(Jacques Clement) #5

Here's how to fix this (it worked for me):
https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0

Delete the indices:
DELETE {{url}}:{{elastic_api_port}}/.kibana_1
DELETE {{url}}:{{elastic_api_port}}/.kibana_2

Create the role:
POST {{url}}:{{elastic_api_port}}/_xpack/security/role/fix_kibana_65
{
  "cluster": ["all"],
  "indices": [
    {
      "names": [ ".tasks" ],
      "privileges": ["create_index", "create", "read"]
    }
  ]
}

Create the user:
POST {{url}}:{{elastic_api_port}}/_xpack/security/user/fixkibana65
{
  "password" : "your_password_here",
  "roles" : [ "kibana_system", "fix_kibana_65" ],
  "full_name" : "Fix Kibana",
  "email" : "abc@email.com"
}

Then set this user in kibana.yml (the elasticsearch.username and elasticsearch.password settings) and restart Kibana.
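For anyone who has to do this from the command line rather than from a REST client, the same steps can be run with curl. Host, port, and the admin credentials below are placeholders for your own setup, and -k skips certificate verification if you run self-signed HTTPS:

```shell
# Placeholders: adjust the host, port, and admin credentials for your cluster.
ES="https://localhost:9200"
AUTH="elastic:changeme"

# 1. Delete the stuck migration indices.
curl -k -u "$AUTH" -X DELETE "$ES/.kibana_1"
curl -k -u "$AUTH" -X DELETE "$ES/.kibana_2"

# 2. Create the fix_kibana_65 role with access to the .tasks index.
curl -k -u "$AUTH" -X POST "$ES/_xpack/security/role/fix_kibana_65" \
  -H 'Content-Type: application/json' -d '{
  "cluster": ["all"],
  "indices": [
    { "names": [".tasks"], "privileges": ["create_index", "create", "read"] }
  ]
}'

# 3. Create the user that Kibana will connect as.
curl -k -u "$AUTH" -X POST "$ES/_xpack/security/user/fixkibana65" \
  -H 'Content-Type: application/json' -d '{
  "password": "your_password_here",
  "roles": ["kibana_system", "fix_kibana_65"],
  "full_name": "Fix Kibana",
  "email": "abc@email.com"
}'
```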


(Len Rugen) #6

I'm having the same issue, but have xpack.security.enabled: false in kibana.yml


(John) #7

I did as you said, but I'm still stuck in the same loop.


(John) #8
    {"type":"log","@timestamp":"2018-11-19T18:02:21Z","tags":["reporting","warning"],"pid":1602,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
    {"type":"log","@timestamp":"2018-11-19T18:02:21Z","tags":["info","migrations"],"pid":1602,"message":"Creating index .kibana_1."}
    {"type":"log","@timestamp":"2018-11-19T18:02:51Z","tags":["status","plugin:spaces@6.5.0","error"],"pid":1602,"state":"red","message":"Status changed from yellow to red - Request Timeout after 30000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
    {"type":"error","@timestamp":"2018-11-19T18:02:51Z","tags":["fatal","root"],"pid":1602,"level":"fatal","error":{"message":"Request Timeout after 30000ms","name":"Error","stack":"Error: Request Timeout after 30000ms\n    at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n    at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n    at ontimeout (timers.js:498:11)\n    at tryOnTimeout (timers.js:323:5)\n    at Timer.listOnTimeout (timers.js:290:5)"},"message":"Request Timeout after 30000ms"}
    {"type":"log","@timestamp":"2018-11-19T18:02:59Z","tags":["status","plugin:kibana@6.5.0","info"],"pid":1615,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}

(John) #9
    {"type":"log","@timestamp":"2018-11-19T18:03:01Z","tags":["info","migrations"],"pid":1615,"message":"Creating index .kibana_1."}
    {"type":"log","@timestamp":"2018-11-19T18:03:01Z","tags":["warning","migrations"],"pid":1615,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

(John) #10

The same problem is reported here.


(John) #11

Whoops! Sorry for the alarm. If you set "cluster.routing.allocation.enable" back to null, then all is fine :slight_smile:
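For anyone in the same situation: if shard allocation was disabled for the rolling upgrade and never re-enabled, resetting the transient setting looks roughly like this (host and credentials below are placeholders):

```shell
# Setting the value to null removes the transient override,
# restoring the default allocation behaviour.
curl -u elastic:changeme -H 'Content-Type: application/json' \
  -X PUT "http://localhost:9200/_cluster/settings" -d '{
  "transient": { "cluster.routing.allocation.enable": null }
}'
```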


(Roger Buchwalder) #12

Thank you miemartien

That worked for me, even though I had to use curl with -k for the HTTPS certs.
Lucky you if you have many Kibana instances on the same cluster...

However, is there any information around about the "old" kibana user? Since that one is built in, you can't delete it. Any ideas how to upgrade?

thanks
rog


(Jacques Clement) #13

I believe you have to keep both users for now. Only when a bug fix comes out (probably with 6.5.1) will you be able to revert to your former configuration and delete the "fix" user and role.
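Once a fixed release is out and kibana.yml points back at the built-in kibana user, cleaning up the temporary user and role should be a matter of two DELETE calls (host and credentials below are placeholders):

```shell
# Remove the temporary user first, then the role it referenced.
curl -k -u elastic:changeme -X DELETE "https://localhost:9200/_xpack/security/user/fixkibana65"
curl -k -u elastic:changeme -X DELETE "https://localhost:9200/_xpack/security/role/fix_kibana_65"
```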