Kibana server is not ready yet

I'm getting a "Kibana server is not ready yet" error when opening the Kibana dashboard.

After looking into the logs, I observed the errors below:

{"type":"log","@timestamp":"2021-01-25T11:11:14Z","tags":["error","elasticsearch","data"],"pid":47885,"message":"[resource_already_exists_exception]: index [.kibana_2/Uw3DuEOERa-r67AHan80oA] already exists"}
{"type":"log","@timestamp":"2021-01-25T11:11:14Z","tags":["warning","savedobjects-service"],"pid":47885,"message":"Unable to connect to Elasticsearch. Error: resource_already_exists_exception"}
{"type":"log","@timestamp":"2021-01-25T11:11:14Z","tags":["warning","savedobjects-service"],"pid":47885,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}

My question: I have indices for the past 30 days which are required for analyzing the logs.
If I delete .kibana_2 and let Kibana recreate it, will the indices and logs be preserved?
Will I still be able to view the logs after the .kibana_2 index is recreated?

Also, I observed the error below, which could be the reason Kibana is going down:

{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"}],"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];"},"status":429}

How can I start Kibana again while preserving the indices and logs?

Thanks..!!

Hi @jayanth_m

Looks like your mount point is full; you need to clear some space, which will also resolve your cluster_block_exception.
Deleting .kibana_2 will not impact your data; it will just kill the existing Kibana migration thread.
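For reference, once disk space is freed you may also need to clear the read-only block manually (older Elasticsearch versions do not auto-remove it), and then delete the stale migration index. A rough sketch, assuming Elasticsearch is on localhost:9200 with no authentication:

```shell
# Remove the read-only-allow-delete block that the flood-stage
# watermark applied to the indices (setting it to null resets it)
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'

# Delete the half-created migration target index so Kibana
# can retry the saved-objects migration on next start
curl -X DELETE "localhost:9200/.kibana_2"
```

Note that on Elasticsearch 7.4 and later the block is released automatically once disk usage drops below the high watermark, so the first call may not be needed.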

Thanks @rohitarorait82

I've cleared the disk space and already tried restarting Kibana, but the issue still exists.

I encountered a similar error last time as well and deleted the .kibana_2 index.
Unfortunately, I was not able to retrieve the old logs and indices afterwards.

So, I've posted here.

@jayanth_m can you please check whether Elasticsearch is working or not?

Also, please paste the output of the journalctl -xe | grep 'kibana' command.

Elasticsearch is active and running; I've just checked with the systemctl command.

Unfortunately, I don't have enough permissions to run the journalctl command on my server :frowning:

@jayanth_m: can you check whether Elasticsearch has been running for some time, or whether it keeps restarting after a few seconds? Also, have you done any upgrade recently?

@rohitarorait82 I haven't upgraded Kibana or Elasticsearch recently.

Regarding Elasticsearch: after observing this issue,
I stopped Kibana and then Elasticsearch.
When bringing things back up, I started Elasticsearch first and then Kibana a few seconds later.

@rohitarorait82

After deleting .kibana_2 and .kibana_task_manager_2 and restarting Kibana, I still see the same issue.

{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["info","savedobjects-service"],"pid":21783,"message":"Creating index .kibana_task_manager_2."}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["error","elasticsearch","data"],"pid":21783,"message":"[resource_already_exists_exception]: index [.kibana_task_manager_2/c61zD_haSMKYRclSpxmccA] already exists"}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["warning","savedobjects-service"],"pid":21783,"message":"Unable to connect to Elasticsearch. Error: resource_already_exists_exception"}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["warning","savedobjects-service"],"pid":21783,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 and restarting Kibana."}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["info","savedobjects-service"],"pid":21783,"message":"Detected mapping change in \"properties.originId\""}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["info","savedobjects-service"],"pid":21783,"message":"Detected mapping change in \"properties.originId\""}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["info","savedobjects-service"],"pid":21783,"message":"Creating index .kibana_2."}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["error","elasticsearch","data"],"pid":21783,"message":"[resource_already_exists_exception]: index [.kibana_2/5GMNQUBZSFSW5URwDX8iMQ] already exists"}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["warning","savedobjects-service"],"pid":21783,"message":"Unable to connect to Elasticsearch. Error: resource_already_exists_exception"}
{"type":"log","@timestamp":"2021-01-26T06:02:32Z","tags":["warning","savedobjects-service"],"pid":21783,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}
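Since the indices keep reappearing as "already exists", it may help to list the .kibana* indices and their aliases before deleting anything, to see which concrete index the .kibana alias actually points at. A sketch, again assuming Elasticsearch on localhost:9200:

```shell
# List all Kibana saved-objects indices with health, doc count, and size
curl "localhost:9200/_cat/indices/.kibana*?v"

# Show which concrete indices the .kibana and .kibana_task_manager
# aliases currently resolve to
curl "localhost:9200/_cat/aliases/.kibana*?v"
```

If .kibana_2 shows up in the index list but the .kibana alias still points at .kibana_1, the migration never completed and the _2 indices are safe to delete.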

@rohitarorait82 I think I've resolved this issue.

There were a lot of indices (around 4 months' worth) present in Elasticsearch.
I cleared all the old indices, keeping only the last 15 days' worth.
Then I deleted .kibana_2 and .kibana_task_manager_2 and restarted Kibana.
It worked then.
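For anyone hitting the same thing, the cleanup steps above might look roughly like this. This is only a sketch: the logstash-* index naming and the localhost:9200 endpoint are assumptions, so adjust both to your own setup.

```shell
# Delete old time-based indices by wildcard
# (hypothetical example: drop everything from October 2020)
curl -X DELETE "localhost:9200/logstash-2020.10.*"

# Delete the stuck saved-objects migration indices;
# Kibana recreates them when it starts up
curl -X DELETE "localhost:9200/.kibana_2"
curl -X DELETE "localhost:9200/.kibana_task_manager_2"

# Restart Kibana so it re-runs the migration
sudo systemctl restart kibana
```

Longer term, an index lifecycle (ILM) policy that deletes indices older than your retention window would avoid the disk filling up again.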

Thank you :slight_smile:

/Jayanth


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.