Hi, I did my rolling upgrade this morning: Logstash, then Elasticsearch, then Kibana. Everything went well and the cluster (3 nodes) status is green.
Now when starting Kibana, the browser says: "Kibana server is not ready yet".
It's been doing that for half an hour now.
Looking at the logs I see:
nov. 16 06:57:24 uatelastic_client1 kibana[2077]: {"type":"log","@timestamp":"2018-11-16T06:57:24Z","tags":["warning","migrations"],"pid":2077,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}
I did exactly what is suggested there, but it did not help; the same problem remains.
I tried a second time and then had the same issue with index .kibana_1.
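Concretely, what I ran was along these lines (assuming Elasticsearch on localhost:9200 with no security enabled and Kibana installed as a systemd service; adjust host, port and credentials to your setup):

# Delete the stuck migration target index named in the log message
curl -XDELETE 'http://localhost:9200/.kibana_2'

# Restart Kibana so it attempts the migration again
sudo systemctl restart kibana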
Any thoughts?
You remove .kibana_2 and then it asks you to remove .kibana_1, and so on. I found the answer to my problem and it is a known issue. I will paste the link to the workaround a bit later.
{"type":"log","@timestamp":"2018-11-19T18:02:21Z","tags":["reporting","warning"],"pid":1602,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"log","@timestamp":"2018-11-19T18:02:21Z","tags":["info","migrations"],"pid":1602,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2018-11-19T18:02:51Z","tags":["status","plugin:spaces@6.5.0","error"],"pid":1602,"state":"red","message":"Status changed from yellow to red - Request Timeout after 30000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"error","@timestamp":"2018-11-19T18:02:51Z","tags":["fatal","root"],"pid":1602,"level":"fatal","error":{"message":"Request Timeout after 30000ms","name":"Error","stack":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:498:11)\n at tryOnTimeout (timers.js:323:5)\n at Timer.listOnTimeout (timers.js:290:5)"},"message":"Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2018-11-19T18:02:59Z","tags":["status","plugin:kibana@6.5.0","info"],"pid":1615,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-11-19T18:03:01Z","tags":["info","migrations"],"pid":1615,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2018-11-19T18:03:01Z","tags":["warning","migrations"],"pid":1615,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}
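For anyone hitting the same loop, it can help to check what state the saved-objects migration is actually in, for example with the cat APIs (again assuming Elasticsearch on localhost:9200; add credentials if security is enabled):

# List the .kibana* indices created by the migration
curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'

# Check which index the .kibana alias currently points to
curl -s 'http://localhost:9200/_cat/aliases/.kibana*?v'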
I believe you have to keep both users for now. Only when a bug fix comes out (probably with 6.5.1) can you revert to your former configuration and delete the "fix" user and role.
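In case it helps while waiting for the fix, a role/user pair of that kind can be created through the X-Pack security API. The names and privileges below are only an illustration of the idea, not necessarily the exact workaround from the known issue:

# Illustrative role granting access to the .kibana* indices (name and privileges are an assumption)
curl -XPUT 'http://localhost:9200/_xpack/security/role/kibana_fix_role' -H 'Content-Type: application/json' -d '
{
  "indices": [
    { "names": [ ".kibana*" ], "privileges": [ "all" ] }
  ]
}'

# Illustrative "fix" user assigned that role
curl -XPUT 'http://localhost:9200/_xpack/security/user/kibana_fix_user' -H 'Content-Type: application/json' -d '
{
  "password": "changeme",
  "roles": [ "kibana_fix_role" ]
}'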