[savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms

Hi, I'm setting up a new ELK stack (7.7.0) on Debian Buster (64-bit).

Kibana is unable to connect to Elasticsearch. The console log shows:

log   [13:32:01.586] [info][savedobjects-service] Starting saved objects migrations
log   [13:32:01.603] [info][savedobjects-service] Creating index .kibana_task_manager_1.
log   [13:32:01.605] [info][savedobjects-service] Creating index .kibana_1.
log   [13:32:31.606] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
log   [13:32:34.120] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/Sq62cOeNQbiFGBDoRNaYjA] already exists, with { index_uuid="Sq62cOeNQbiFGBDoRNaYjA" & index=".kibana_task_manager_1" }
log   [13:32:34.121] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana.
log   [13:32:34.122] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/m2zcJV1gSTKUDbfgadBNuA] already exists, with { index_uuid="m2zcJV1gSTKUDbfgadBNuA" & index=".kibana_1" }
log   [13:32:34.123] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.

I've also tried:

curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'
{"acknowledged":true}
curl -XDELETE http://localhost:9200/.kibana*
{"acknowledged":true}

and restarting, but it still doesn't work and shows the same errors.

Also:

curl "http://localhost:9200/_cat/indices/*?v&s=index"
health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
red    open   .apm-custom-link         0iXqzwytR4uLEWTjK_iLOQ   1   1
red    open   .kibana_task_manager_1   5ZG9AKXLSn2lkfGpy7UqXw   1   1
red    open   .apm-agent-configuration QS5piFfbRh2ExdO5hzFIOQ   1   1
red    open   .kibana_1                g_zCF7z7R5aqiJXp1sin8Q   1   1

Kibana uses two indices for its saved objects: .kibana_n and .kibana_task_manager_n (where n is a positive integer that increases with each migration).
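To see which of these indices currently exist, you can list them directly with the _cat API (assuming Elasticsearch is reachable on localhost:9200, as in your commands above):

```shell
# List all of Kibana's saved-objects indices, sorted by name
curl "http://localhost:9200/_cat/indices/.kibana*?v&s=index"
```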

When multiple Kibana instances start at the same time, they will all try to create these indices, but some will fail because another instance already created them. When that happens, the failing node goes into a polling loop, waiting for the other node to finish the migration.

In your case, it looks like Kibana tried to create an index, and that request might have succeeded, but because of the timeout Kibana never learned that it had. The node then fell back into the polling loop, waiting for another instance to complete the migration, even though no other instance was busy with it.

Deleting all the Kibana indices, as you did, will allow Kibana to attempt the migration again:

curl -XDELETE http://localhost:9200/.kibana*

If the same error keeps recurring, something in your ES cluster may be preventing Kibana from creating a new index within 30 seconds. Can you share your ES and Kibana logs from after deleting all the .kibana* indices?
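One place to start narrowing this down: your _cat/indices output shows all four indices with red health, which means primary shards are unassigned. The standard cluster APIs can tell you why (commands below assume Elasticsearch on localhost:9200):

```shell
# Overall cluster health; "red" means at least one primary shard is unassigned
curl "http://localhost:9200/_cluster/health?pretty"

# Ask Elasticsearch why the first unassigned shard cannot be allocated
curl "http://localhost:9200/_cluster/allocation/explain?pretty"

# Check disk usage per data node; low free disk can trigger read-only blocks
# like the index.blocks.read_only_allow_delete setting you already cleared
curl "http://localhost:9200/_cat/allocation?v"
```

The allocation-explain response usually names the exact allocation decider (disk watermark, shard limits, node filters, etc.) that is blocking the shard, which is often the root cause behind the 30-second timeout Kibana hits.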

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.