"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception]" after upgrade ELK

Hi! I am upgrading ELK from 7.2 to 7.9.1.
Kibana is unable to connect to ES.

After the upgrade, Kibana had two copies of its indices:

curl 'https://localhost:9200/_cat/indices/*?v&s=index'

health status index                                                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1                                                          70vM6TvWRD-gkYwsH_pfsQ   1   0          0            0       208b           208b
green  open   .kibana_2                                                          ej6zN6GgQJSFKOQDglj-xg   1   0       2424           96      2.1mb          2.1mb
green  open   .kibana_task_manager_1                                             BMD5VAJ6SFy-IeBJwLzoAA   1   0          0            0       208b           208b
green  open   .kibana_task_manager_2                                             w7yLj54JRd6-8_bzNyX70g   1   0          2            0     31.9kb         31.9kb

I have tried deleting them and restarting:

curl -XDELETE 'http://localhost:9200/.kibana*'

But the same error happens over and over.

Result:

health status index                                                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana_1                                                          bFi-5a5RTwqGWklNRxO2DA   1   0                                                  
yellow open   .kibana_task_manager_1                                             MQ0Gr6pHT12TRFyb-JYxfQ   1   0                                                  

Logs:

kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:21Z","tags":["info","savedobjects-service"],"pid":8,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:21Z","tags":["info","savedobjects-service"],"pid":8,"message":"Starting saved objects migrations"}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:21Z","tags":["info","savedobjects-service"],"pid":8,"message":"Creating index .kibana_task_manager_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:21Z","tags":["info","savedobjects-service"],"pid":8,"message":"Creating index .kibana_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:51Z","tags":["warning","savedobjects-service"],"pid":8,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:53Z","tags":["warning","savedobjects-service"],"pid":8,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/MQ0Gr6pHT12TRFyb-JYxfQ] already exists, with { index_uuid=\"MQ0Gr6pHT12TRFyb-JYxfQ\" & index=\".kibana_task_manager_1\" }"}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:53Z","tags":["warning","savedobjects-service"],"pid":8,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:53Z","tags":["warning","savedobjects-service"],"pid":8,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/bFi-5a5RTwqGWklNRxO2DA] already exists, with { index_uuid=\"bFi-5a5RTwqGWklNRxO2DA\" & index=\".kibana_1\" }"}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:16:53Z","tags":["warning","savedobjects-service"],"pid":8,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

I have changed elasticsearch.requestTimeout in the config file kibana.yml:

elasticsearch.requestTimeout: 70000
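Since the logs come from a container named kibana_1, the same setting can also be passed as an environment variable: the Kibana Docker image maps uppercased, underscore-separated variable names onto kibana.yml settings. A minimal docker-compose fragment, assuming a hypothetical service layout:

```yaml
# docker-compose.yml (fragment) -- service name and image tag are assumptions
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.1
    environment:
      # Maps to elasticsearch.requestTimeout in kibana.yml
      ELASTICSEARCH_REQUESTTIMEOUT: "70000"
```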

It works:

kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:32:32Z","tags":["info","savedobjects-service"],"pid":9,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:32:32Z","tags":["info","savedobjects-service"],"pid":9,"message":"Starting saved objects migrations"}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:32:32Z","tags":["info","savedobjects-service"],"pid":9,"message":"Creating index .kibana_task_manager_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:32:32Z","tags":["info","savedobjects-service"],"pid":9,"message":"Creating index .kibana_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:33:02Z","tags":["info","savedobjects-service"],"pid":9,"message":"Pointing alias .kibana_task_manager to .kibana_task_manager_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:33:02Z","tags":["info","savedobjects-service"],"pid":9,"message":"Pointing alias .kibana to .kibana_1."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:33:23Z","tags":["info","savedobjects-service"],"pid":9,"message":"Finished in 51281ms."}
kibana_1             | {"type":"log","@timestamp":"2020-09-25T17:33:23Z","tags":["info","savedobjects-service"],"pid":9,"message":"Finished in 51389ms."}

Hi @bognad

I'm glad to hear that you found a way to fix it. Usually the default timeout of 30000 ms is good enough, but it depends on where you are hosting your ES and Kibana instances.

You can double-check the upgrade flow in the official documentation.

Regards, Dzmitry


Thanks, Dzmitry!
I am glad to see an Elastic Team Member in my topic :)
On the next boot, Kibana finished in 28000 ms.
Maybe increasing the timeout is only necessary for the migration.
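One way to confirm the migration has completed (so the higher timeout is no longer needed) is to check that the .kibana aliases now point at the migrated indices, matching the "Pointing alias" log lines above. A sketch, assuming the Elasticsearch URL from the post:

```shell
# List the .kibana and .kibana_task_manager aliases and their target
# indices; quote the URL so the shell does not interpret '*' or '?'
curl -s 'http://localhost:9200/_cat/aliases/.kibana*?v'
```

If .kibana points at .kibana_1 and .kibana_task_manager at .kibana_task_manager_1, the saved objects migration finished successfully.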