Error "Kibana server is not ready yet"

Since last night I have been getting the error message "Kibana server is not ready yet" in the browser when I try to open Kibana.

I've done a lot of research, but I can't find the reason.

I've uploaded the logs at https://www.dropbox.com/s/o0xdiybnv8n2qhd/kibana.zip?dl=0. In kibana.log, the following looks strange:

{"type":"log","@timestamp":"2018-11-15T09:49:47Z","tags":["license","info","xpack"],"pid":3325,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
{"type":"log","@timestamp":"2018-11-15T09:49:48Z","tags":["reporting","warning"],"pid":3325,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"log","@timestamp":"2018-11-15T09:49:48Z","tags":["info","migrations"],"pid":3325,"message":"Creating index .kibana_2."}

{"type":"error","@timestamp":"2018-11-15T10:01:07Z","tags":["warning","stats-collection"],"pid":3325,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.5.0]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.5.0]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.5.0\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.5.0]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.5.0]: routing [null]]\\\"},\\\"status\\\":503}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n    at emitNone (events.js:111:20)\n    at IncomingMessage.emit (events.js:208:7)\n    at endReadableNT (_stream_readable.js:1064:12)\n    at _combinedTickCallback (internal/process/next_tick.js:138:11)\n    at process._tickCallback (internal/process/next_tick.js:180:9)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.5.0]: routing [null]]"}
{"type":"log","@timestamp":"2018-11-15T10:01:07Z","tags":["warning","stats-collection"],"pid":3325,"message":"Unable to fetch data from kibana_settings collector"}

The status of Elasticsearch is:

{
    "name": "mX--zzP",
    "cluster_name": "feondi",
    "cluster_uuid": "43eRvFDTTp6xxzqMdz-xLw",
    "version": {
        "number": "6.5.0",
        "build_flavor": "default",
        "build_type": "deb",
        "build_hash": "816e6f6",
        "build_date": "2018-11-09T18:58:36.352602Z",
        "build_snapshot": false,
        "lucene_version": "7.5.0",
        "minimum_wire_compatibility_version": "5.6.0",
        "minimum_index_compatibility_version": "5.0.0"
    },
    "tagline": "You Know, for Search"
}

Can you please help me to get Kibana up and running again?

1 Like

Hey @litti,

Hmm, what I see in the logs is "This version of Kibana requires Elasticsearch v6.5.0 on all nodes. I found the following incompatible nodes in your cluster: v6.4.0 @ 127.0.0.1:9200 (127.0.0.1)", and that doesn't allow Kibana to work properly. Are you sure you don't have a 6.4.0 node running somewhere within the same cluster?
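
A quick way to verify every node's version is the cat nodes API (a sketch, assuming Elasticsearch is reachable on localhost:9200):

curl 'http://localhost:9200/_cat/nodes?v&h=name,ip,version'

Any node that does not report 6.5.0 would trigger that message.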

Best,
Oleg

1 Like

I have the same issue, and I have updated the whole stack.
According to my logs, in my case the issue with Elasticsearch was an outdated plugin that wasn't updated "on the fly": ingest-geoip.
So consider what to do with it, update it or delete it, and try again.
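
For what it's worth, updating or removing that plugin could look like this (a sketch assuming a default package install with the tool in /usr/share/elasticsearch/bin; stop Elasticsearch first):

sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin remove ingest-geoip
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip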

1 Like

Hi Oleg,
thanks a lot for looking into this. Yes, there is only one node (localhost) in the cluster, and it is on version 6.5.0.

But starting with a fresh, empty log led me to the solution: there was the message "Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."
So I did that, and everything is well again 🙂
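
For reference, that amounts to something like this (a sketch assuming Elasticsearch on localhost:9200 and a systemd-based Kibana install):

curl -XDELETE http://localhost:9200/.kibana_2
sudo systemctl restart kibana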

Thanks a lot!

3 Likes

Hi Carlos,
thanks a lot for looking into this!

I've tried this, but there is no plugin installed ("elasticsearch-plugin list" returns an empty list).

But I found the solution by looking at a fresh, empty log: there was an index .kibana_2 that blocked the migration.

2 Likes

I'm having the same problem, but deleting .kibana_2 didn't help. Kibana starts up and seems to hang after the log line stating that it's reindexing into .kibana_2. Any suggestions?

Edit: no ES or Kibana plugins installed; the standard set of Logstash plugins is installed.

1 Like

How can that be? Both the "Elasticsearch" plugin (the one that allows Kibana to communicate with ES) and the "Kibana" plugin (Discover, Dashboard, and other Kibana app parts) are installed by default; you can't "uninstall" them.

1 Like

What I mean is that "elasticsearch-plugin list" returns nothing, so I can't upgrade out-of-date plugins because there aren't any. In any case, the release notes say this:

Kibana gets stuck when upgrading from an older version

After upgrading from an older version of Kibana while using X-Pack security, if you get a permission error when you start Kibana for the first time, do the following steps to recover:

1. Stop Kibana
2. Delete the .kibana_1 and .kibana_2 indices that were created
3. Create a new role in Elasticsearch that has the all permission for the .tasks index
4. Create a new user in Elasticsearch that has the kibana_system role as well as the new role you just created
5. Update elasticsearch.username and elasticsearch.password in kibana.yml with the details from that new user
6. Start Kibana

This will be fixed in a future bug fix release, at which time you can go back to using the built-in kibana user.

So I'm going to work through this; I think this is my problem.

Edit:

Solved

What I ended up doing was rolling back to 6.4.0, creating the new role and the new user as described above, and then upgrading to 6.5.0 using the repo. No more hanging on Kibana startup, and it has successfully reindexed .kibana:

{"type":"log","@timestamp":"2018-11-15T13:37:14Z","tags":["info","migrations"],"pid":15040,"message":"Creating index .kibana_2."}
{"type":"log","@timestamp":"2018-11-15T13:37:15Z","tags":["info","migrations"],"pid":15040,"message":"Reindexing .kibana to .kibana_1"}
{"type":"log","@timestamp":"2018-11-15T13:37:16Z","tags":["info","migrations"],"pid":15040,"message":"Migrating .kibana_1 saved objects to .kibana_2"}
{"type":"log","@timestamp":"2018-11-15T13:37:16Z","tags":["info","migrations"],"pid":15040,"message":"Pointing alias .kibana to .kibana_2."}
{"type":"log","@timestamp":"2018-11-15T13:37:16Z","tags":["info","migrations"],"pid":15040,"message":"Finished in 1983ms."}
{"type":"log","@timestamp":"2018-11-15T13:37:16Z","tags":["listening","info"],"pid":15040,"message":"Server running at http://0.0.0.0:5601"}

1 Like

FWIW, I'm having the same issue, but downgrading to 6.4.3 didn't work for me.

I keep getting the error message: plugin:security@6.4.3 Privileges are missing and can't be removed, currently.

Since I'm very new to the ELK stack, I guess I have to look into how to create users and roles manually in ES next...

I'm trying to do that using

curl -X POST "10.42.10.204:9200/_xpack/security/role/tasks_bugfix" -H 'Content-Type: application/json' -d'
{
  "indices": [
    {
      "names": [ ".tasks" ],
      "privileges": [ "all" ]
    }
  ]
}
'

But that results in "No handler found for uri [/_xpack/security/role/tasks_bugfix] and method [POST]"?

1 Like

I had the exact same problem after upgrading all Elastic components in my stack.
Since it is more of a "test environment", the simplest solution for me was deleting all indices older than 6.4.2.

1 Like

I've been told that https://github.com/elastic/kibana/pull/24873 will fix this downgrading issue. But for now, when you downgrade, you need to manually delete the space-specific Kibana "application privileges", e.g.:

curl -u elastic:changeme -XDELETE localhost:9200/_xpack/security/privilege/kibana-.kibana/space_all
curl -u elastic:changeme -XDELETE localhost:9200/_xpack/security/privilege/kibana-.kibana/space_read

1 Like

I'm getting a similar "No handler found for uri [/_xpack/security/privilege/kibana-.kibana/space_all] and method [DELETE]" as above. I do have to connect to an ES node, right?

How would I, instead of downgrading, do the two steps listed in the release notes?

Create a new role in Elasticsearch that has the all permission for the .tasks index
Create a new user in Elasticsearch that has the kibana_system role as well as the new role you just created

1 Like

I successfully downgraded by renaming the Kibana index. In the Kibana config I changed the index from .kibana to .kibana2, then used the request below to move the data from the old index to the new one:

POST _reindex?wait_for_completion=true
{
  "source": {
    "index": ".kibana"
  },
  "dest": {
    "index": ".kibana2"
  }
}
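
The matching Kibana setting change would be something like this (a sketch assuming a package install with the config in /etc/kibana):

# /etc/kibana/kibana.yml
kibana.index: ".kibana2"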

1 Like

OK, I was able to do the two steps using the Cerebro REST interface (I guess I'm doing something wrong with curl to get the "No handler found" error; I noticed that in Cerebro it works without the leading /):

path: _xpack/security/role/tasks_bugfix

{
  "indices": [
    {
      "names": [ ".tasks" ],
      "privileges": [ "all" ]
    }
  ]
}

path: _xpack/security/user/user_bugfix

{
  "password" : "CHANGETHIS",
  "full_name" : "User Bugfix",
  "email" : "email@example.com",
  "roles" : [ "kibana_system", "tasks_bugfix" ]
}

Next I replaced elasticsearch.username and elasticsearch.password in /etc/kibana/kibana.yml with user_bugfix and CHANGETHIS, made sure that the .kibana_1 and .kibana_2 indices were gone, and started Kibana.
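
For reference, the relevant kibana.yml lines then look something like this (CHANGETHIS being whatever password you chose above):

elasticsearch.username: "user_bugfix"
elasticsearch.password: "CHANGETHIS"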

Looks like it's working now.

Now I guess I need to wait for the bugfix release to remove the extra role and user and switch back to the old one.

1 Like

Hi,

Today I also updated Kibana, Elasticsearch, and Logstash from 6.4 to 6.5, and I have exactly the same problem. Kibana cannot start, and I am getting the same error: "Kibana server is not ready yet".

Can you please help me overcome this? How can I check the logs, and what steps do I have to take?
I have installed them all on a CentOS 7 machine.

Best Regards,
Thanos

1 Like

I solved this problem by running:

curl -XDELETE http://localhost:9200/.kibana
curl -XDELETE http://localhost:9200/.kibana*
curl -XDELETE http://localhost:9200/.kibana_2
curl -XDELETE http://localhost:9200/.kibana_1

9 Likes

The proper way is as described in the release notes:

1. Stop Kibana
   This depends on your operating system.

2. Delete the .kibana_1 and .kibana_2 indices that were created
   I used the Cerebro frontend to do that, because I keep having trouble with curl. But it should be possible to delete them with curl (localhost being an Elasticsearch node):

curl -XDELETE http://localhost:9200/.kibana_1
curl -XDELETE http://localhost:9200/.kibana_2

3. Create a new role in Elasticsearch that has create_index, create, and read permissions for the .tasks index
   As I wrote above, curl didn't work for me, so I did this in the Cerebro REST interface:

path: _xpack/security/role/tasks_bugfix

{
  "indices": [
    {
      "names": [ ".tasks" ],
      "privileges": [ "all" ]
    }
  ]
}

4. Create a new user in Elasticsearch that has the kibana_system role as well as the new role you just created (note: change CHANGETHIS)

path: _xpack/security/user/user_bugfix

{
  "password" : "CHANGETHIS",
  "full_name" : "User Bugfix",
  "email" : "email@example.com",
  "roles" : [ "kibana_system", "tasks_bugfix" ]
}

5. Update elasticsearch.username and elasticsearch.password in kibana.yml with the details from that new user
   The location of kibana.yml depends on your installation.

6. If using a Kibana secure settings keystore, remove the keys elasticsearch.username and elasticsearch.password from the keystore using the kibana-keystore tool, then add them back using the new user and password as values.
   Since I don't use the keystore yet, I can't comment on this, but the description seems good enough; see the sketch after this list.

7. Start Kibana
   As above, this depends on your system. This will be fixed in a future bug fix release, at which time you can go back to using the built-in kibana user and undo all these changes.
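
For the keystore step, a minimal sketch (assuming a package install with the tool in /usr/share/kibana/bin; each add command prompts for the value):

sudo /usr/share/kibana/bin/kibana-keystore remove elasticsearch.username
sudo /usr/share/kibana/bin/kibana-keystore remove elasticsearch.password
sudo /usr/share/kibana/bin/kibana-keystore add elasticsearch.username
sudo /usr/share/kibana/bin/kibana-keystore add elasticsearch.password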

2 Likes

Restarting all of the services fixed it for me.

sudo systemctl restart logstash elasticsearch kibana

And then if it still doesn't work, give Kibana one more restart and give it a little time.

sudo systemctl restart kibana

Check 🙂

4 Likes

Is .kibana the real index name? If it is an alias, the index migration will fail.
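
A quick way to check is the cat aliases API (assuming Elasticsearch on localhost:9200):

curl 'http://localhost:9200/_cat/aliases/.kibana?v'

If this returns a row, .kibana is an alias pointing at another index rather than a real index.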

1 Like

I just deleted both .kibana_1 and .kibana_2, but now my old index patterns, visualizations, everything is missing. Is there any way to get all that back?

2 Likes