While spinning up Elasticsearch on EC2 machines with warm nodes, Kibana shows a "not ready yet" message

Hi,

I am new to self-hosted Elasticsearch. For a POC I started spinning up a self-hosted ES cluster, and the setup with data nodes and client nodes worked fine. After I added the warm nodes, I am seeing the "Kibana server is not ready yet" message.

elasticsearch.yml:

bootstrap.memory_lock: true
node.name: ${HOSTNAME}

action.destructive_requires_name: true
indices.fielddata.cache.size: 1% # default is unbounded
cluster.name: sfhlogging
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
path.data: /opt/elasticsearch/data
path.logs: /var/log/elasticsearch
xpack.security.enabled: false

network.host: _ec2:privateIpv4_,localhost
plugin.mandatory: discovery-ec2
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
discovery:
    seed_providers: ec2
    ec2.groups: xxxx
    ec2.host_type: private_ip
    ec2.tag.Cluster: dev-sfhlogging
    ec2.protocol: http # no need for HTTPS on internal AWS calls

    # manually set the endpoint because of auto-discovery issues
    # https://github.com/elastic/elasticsearch/issues/27464
    ec2.endpoint: ec2.us-east-1.amazonaws.com
node.master: false
node.data: false
node.ingest: false

Logs from Kibana:

Aug 20 01:12:50 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:50Z","tags":["info","plugins","watcher"],"pid":13380,"message":"Your basic license does not support watcher. Please upgrade your license."}
Aug 20 01:12:50 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:50Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":13380,"message":"Starting monitoring stats collection"}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["info","savedobjects-service"],"pid":13380,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["info","savedobjects-service"],"pid":13380,"message":"Starting saved objects migrations"}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["info","savedobjects-service"],"pid":13380,"message":"Creating index .kibana_task_manager_1."}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["info","savedobjects-service"],"pid":13380,"message":"Creating index .kibana_1."}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["warning","savedobjects-service"],"pid":13380,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/yDv0AjZvTleukS4WfyAnAA] already exists, with { index_uuid=\"yDv0AjZvTleukS4WfyAnAA\" & index=\".kibana_task_manager_1\" }"}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["warning","savedobjects-service"],"pid":13380,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["warning","savedobjects-service"],"pid":13380,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/eKnGmMQwSZuiYcOMNFsrqw] already exists, with { index_uuid=\"eKnGmMQwSZuiYcOMNFsrqw\" & index=\".kibana_1\" }"}
Aug 20 01:12:51 ip-10-128-44-63 kibana[13380]: {"type":"log","@timestamp":"2020-08-20T01:12:51Z","tags":["warning","savedobjects-service"],"pid":13380,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

I deleted the Kibana indices and restarted Kibana, but Kibana is not coming back:

red open .kibana_task_manager_1 yDv0AjZvTleukS4WfyAnAA 1 1
red open .kibana_1              eKnGmMQwSZuiYcOMNFsrqw 1 1
1. If we are using EC2 machines for ES in a hot-warm architecture, how do we get the pricing advantage of the warm nodes? If anyone has come across this scenario, could you please share some high-level info? (One common approach is sketched below.)
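
On the pricing question: warm nodes usually run on cheaper, storage-dense EC2 instance types, and older indices get moved onto them with shard allocation filtering so the expensive hot tier stays small. A minimal sketch, assuming the warm nodes set node.attr.box_type: warm in their elasticsearch.yml (the attribute name and the index name logs-2020.07.01 are placeholders, not from this thread):

# Hypothetical: move an older index onto the warm tier, freeing the hot nodes.
curl -XPUT -H 'Content-Type: application/json' \
  'http://10.128.xx.xx:9200/logs-2020.07.01/_settings' \
  -d '{"index.routing.allocation.require.box_type": "warm"}'

ILM can automate this move as indices age, but the allocation filter above is the mechanism underneath.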

You are seeing errors during the saved objects migration. Since it looks like you don't have any previous Kibana configuration to preserve, I recommend deleting the .kibana* indices to clear the errors, then restarting Kibana.

Hi,
Thanks for the reply.
I have done that many times, but it didn't help.

I've already shared the steps I would take to fix this issue, which is to delete all of the affected indices. What steps did you take so far?

I deleted .kibana_task_manager_1 and .kibana_1:
curl -XDELETE http://10.128.xx.xx:9200/.kibana_task_manager_1
curl -XDELETE http://10.128.xx.xx:9200/.kibana_1
systemctl restart kibana
curl http://10.128.xx.xx:5601

Kibana server is not ready yet

Try stopping Kibana first, and verify that there are no .kibana indices before starting.
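
For example, a minimal sketch of that ordering, using the redacted addresses from the thread. Note that action.destructive_requires_name: true in the config above makes Elasticsearch reject wildcard deletes, so each index has to be named explicitly:

systemctl stop kibana
# delete both saved-objects indices by exact name
curl -XDELETE 'http://10.128.xx.xx:9200/.kibana_1'
curl -XDELETE 'http://10.128.xx.xx:9200/.kibana_task_manager_1'
# verify nothing matching .kibana* is left before starting Kibana again
curl 'http://10.128.xx.xx:9200/_cat/indices/.kibana*?v'
systemctl start kibana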

I stopped Kibana.
I checked for .kibana indices; none were there.
I restarted Kibana.
curl 10.128.xx.xx:9200/_cat/indices
red open .kibana_task_manager_1 XSCzMemTSza0BDR2fdGNtA 1 1    
red open .kibana_1              k_twoA66Shm9QWXZEsDibg 1 1  

curl http://10.128.xx.xx:5601
Kibana server is not ready yet 

Logs:

Aug 24 19:41:48 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:41:48Z","tags":["info","plugins","watcher"],"pid":16069,"message":"Your basic license does not support watcher. Please upgrade your license."}
Aug 24 19:41:48 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:41:48Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":16069,"message":"Starting monitoring stats collection"}
Aug 24 19:41:48 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:41:48Z","tags":["info","savedobjects-service"],"pid":16069,"message":"Starting saved objects migrations"}
Aug 24 19:41:48 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:41:48Z","tags":["info","savedobjects-service"],"pid":16069,"message":"Creating index .kibana_task_manager_1."}
Aug 24 19:41:48 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:41:48Z","tags":["info","savedobjects-service"],"pid":16069,"message":"Creating index .kibana_1."}
Aug 24 19:42:18 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:42:18Z","tags":["warning","savedobjects-service"],"pid":16069,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
Aug 24 19:42:20 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:42:20Z","tags":["warning","savedobjects-service"],"pid":16069,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/XSCzMemTSza0BDR2fdGNtA] already exists, with { index_uuid=\"XSCzMemTSza0BDR2fdGNtA\" & index=\".kibana_task_manager_1\" }"}
Aug 24 19:42:20 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:42:20Z","tags":["warning","savedobjects-service"],"pid":16069,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}
Aug 24 19:42:20 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:42:20Z","tags":["warning","savedobjects-service"],"pid":16069,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/k_twoA66Shm9QWXZEsDibg] already exists, with { index_uuid=\"k_twoA66Shm9QWXZEsDibg\" & index=\".kibana_1\" }"}
Aug 24 19:42:20 ip-10-128-xx-xx kibana[16069]: {"type":"log","@timestamp":"2020-08-24T19:42:20Z","tags":["warning","savedobjects-service"],"pid":16069,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Because the steps aren't working, I suspect that there is something else happening in the environment or the configuration that isn't set up correctly. Are you running more than one Elasticsearch node? Are you running more than one Kibana instance?
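
A quick way to check both, assuming systemd-managed hosts and the same redacted addresses as above (run the process check on every box that might be running Kibana):

# more than one Kibana instance pointed at the same cluster can fight over the
# saved-objects migration
pgrep -af kibana
# list every node the cluster actually sees, with its roles
curl 'http://10.128.xx.xx:9200/_cat/nodes?v&h=name,node.role,master'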

Thanks for the reply.
Yes, I have 3 master nodes, 3 data nodes, 3 client nodes, and 3 warm nodes.
I am using the setup below as a base, customized to my requirements, with the warm nodes added.

It sounds like your Elasticsearch cluster is interacting badly with the Kibana settings. I would strongly recommend making a new post in the Elasticsearch forum to ask about the cluster settings you're using.
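
If it helps when posting there: red status on .kibana_1 and .kibana_task_manager_1 means at least one primary shard is unassigned, and Elasticsearch can report why. A sketch of the diagnostics worth attaching, using the same redacted address:

# with no request body, this endpoint explains an arbitrary unassigned shard
curl 'http://10.128.xx.xx:9200/_cluster/allocation/explain?pretty'
# shard-level view of the Kibana indices
curl 'http://10.128.xx.xx:9200/_cat/shards/.kibana*?v'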

Thank you

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.