Kibana server is not ready yet [7.6.2]

I have brought up Elasticsearch 7.6.2 and Kibana 7.6.2 (one instance each) using the latest Kubernetes operator (ECK), version 1.1.

I deployed Elasticsearch first, waited for the pod to turn green, and then waited a bit longer just to make sure the server was completely ready. After that, I deployed Kibana and saw it turn green as well.
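
For reference, this is roughly how I was watching the status (the ECK resources expose a HEALTH column; the resource and namespace names below are the ones from my setup):

$ kubectl -n elk get elasticsearch,kibana
$ kubectl -n elk get pods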

But when I hit the URL, it says "Kibana server is not ready yet"!

Please find the Kibana logs below:

{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["info","savedobjects-service"],"pid":6,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Could not create APM Agent configuration: Authentication Exception
{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack], with { header={ WWW-Authenticate={ 0=\"Bearer realm=\\\"security\\\"\" & 1=\"ApiKey\" & 2=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":[\\\"Bearer realm=\\\\\\\"security\\\\\\\"\\\",\\\"ApiKey\\\",\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"]}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":[\\\"Bearer realm=\\\\\\\"security\\\\\\\"\\\",\\\"ApiKey\\\",\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"]}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Bearer realm=\\\"security\\\", ApiKey, Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_task_manager_1."}
{"type":"log","@timestamp":"2020-05-06T18:06:38Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/nHePOgqSStCLaXQ2E8UWeA] already exists, with { index_uuid=\"nHePOgqSStCLaXQ2E8UWeA\" & index=\".kibana_1\" }"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/Q_xwN10qSgmSP5FOiy16cQ] already exists, with { index_uuid=\"Q_xwN10qSgmSP5FOiy16cQ\" & index=\".kibana_task_manager_1\" }"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}

The config file kibana.yml from the Kibana pod looks fine:

bash-4.2$ cat /usr/share/kibana/config/kibana.yml
elasticsearch:
  hosts:
  - https://elasticsearch-server-es-http.elk.svc:9200
  password: 83C1k659VsGLIY4q1Ja0kKL0
  ssl:
    certificateAuthorities: /usr/share/kibana/config/elasticsearch-certs/ca.crt
    verificationMode: certificate
  username: elk-elk-kibana-kibana-user
server:
  host: "0"
  name: elk-kibana
  ssl:
    certificate: /mnt/elastic-internal/http-certs/tls.crt
    enabled: true
    key: /mnt/elastic-internal/http-certs/tls.key
xpack:
  license_management:
    ui:
      enabled: false
  monitoring:
    ui:
      container:
        elasticsearch:
          enabled: true
  security:
    encryptionKey: KxB0hlQBvaJhA4KRW1ele2MqzTQJeZQAAOHHs5whiWzVolXr6KBdJTq9kBMf6t81
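
As an extra sanity check on those credentials, the password in kibana.yml can be compared against the secret the operator manages for that Kibana user. The secret and key names below are only a guess derived from the user name, so they may differ; kubectl -n elk get secrets will show the actual ones.

$ kubectl -n elk get secrets | grep kibana-user
$ kubectl -n elk get secret elk-kibana-kibana-user \
    -o jsonpath="{.data['elk-elk-kibana-kibana-user']}" | base64 -d; echo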

And I can see that the Elasticsearch server can be reached from within the Kibana pod:

bash-4.2$ curl --cert  /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key  -u "elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0" -k "https://elasticsearch-server-es-http.elk.svc:9200/_xpack"; echo

{"build":{"hash":"ef48eb35cf30adf4db14086e8aabd07ef6fb113f","date":"2020-03-26T06:34:37.794943Z"},"license":{"uid":"6f88e1b6-bc9a-430c-8a3a-f7432f983c3c","type":"basic","mode":"basic","status":"active"},"features":{"analytics":{"available":true,"enabled":true},"ccr":{"available":false,"enabled":true},"enrich":{"available":true,"enabled":true},"flattened":{"available":true,"enabled":true},"frozen_indices":{"available":true,"enabled":true},"graph":{"available":false,"enabled":true},"ilm":{"available":true,"enabled":true},"logstash":{"available":false,"enabled":true},"ml":{"available":false,"enabled":true,"native_code_info":{"version":"7.6.2","build_hash":"e06ef9d86d5332"}},"monitoring":{"available":true,"enabled":true},"rollup":{"available":true,"enabled":true},"security":{"available":true,"enabled":true},"slm":{"available":true,"enabled":true},"spatial":{"available":true,"enabled":true},"sql":{"available":true,"enabled":true},"transform":{"available":true,"enabled":true},"vectors":{"available":true,"enabled":true},"voting_only":{"available":true,"enabled":true},"watcher":{"available":false,"enabled":true}},"tagline":"You know, for X"}
curl --cert  /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key  -u "elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0" -k "https://elasticsearch-server-es-http.elk.svc:9200"

{
  "name" : "elasticsearch-server-es-default-0",
  "cluster_name" : "elasticsearch-server",
  "cluster_uuid" : "a0Vj2ApYTg-CnbRefdHdqg",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

So I am wondering what is causing this, and whether you can spot anything in the logs that would help troubleshoot it. :face_with_monocle:

Are you sure no other Kibana instance is pointing to this index? If the original .kibana index still exists and you keep seeing the message about another instance migrating, try deleting .kibana_1 and .kibana_2 (if it exists), then restart Kibana and post the logs here, along with any logs from Elasticsearch. I presume something is failing along the way.

curl -XDELETE http://localhost:9200/.kibana*   (this will delete all the Kibana indices)
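
On a secured ECK cluster like yours, the same thing would look roughly like this from inside the Kibana pod, reusing the certificates, credentials, and service name from your earlier curl (assuming that user is allowed to delete the .kibana* indices), followed by a restart of the Kibana pod:

bash-4.2$ curl --cert /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key \
    -u "elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0" -k \
    -XDELETE "https://elasticsearch-server-es-http.elk.svc:9200/.kibana*"

$ kubectl -n elk get pods                      # find the Kibana pod name
$ kubectl -n elk delete pod <kibana-pod-name>  # the operator recreates it, which restarts Kibana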

But I do see an authentication exception in the logs, so I am afraid you are hitting the same issue as [APM] Authorization exception on Kibana startup · Issue #45610 · elastic/kibana · GitHub.

Can you confirm?
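
One way to check is to grep the Kibana pod logs for that APM error (replace the pod name with yours); if it shows up on every startup, that issue is a likely match:

$ kubectl -n elk logs <kibana-pod-name> | grep -i "APM Agent configuration"
# if it matches, a possible (untested here) workaround is disabling the APM UI
# with xpack.apm.enabled: false in the Kibana configuration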

cc @OliverG

I do not have multiple Kibana instances running. From the logs I could see a couple of things that were bothering me.

The operator logs print the error below when I deploy Kibana:

$ kubectl -n elastic-system logs -f statefulset.apps/elastic-operator | grep error

{"log.level":"error","@timestamp":"2020-05-06T21:46:37.593Z","log.logger":"controller-runtime.controller","message":"Reconciler error","service.version":"1.1.0-29e7447f","service.type":"eck","ecs.version":"1.4.0","controll
er":"kibana-controller","request":"elk/elk-kibana","error":"Operation cannot be fulfilled on kibanas.kibana.k8s.elastic.co \"elk-kibana\": the object has been modified; please apply your changes to the latest version and t
ry again","error.stack_trace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-ru
ntime@v0.5.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go
:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/u
til/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:88"}
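
From what I have read, that "object has been modified" conflict is usually a transient optimistic-concurrency retry rather than a hard failure. Something like this should show whether the operator eventually reconciled the Kibana resource:

$ kubectl -n elk get kibana elk-kibana
$ kubectl -n elk describe kibana elk-kibana   # events at the bottom show repeated reconcile failures, if any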

And the Elasticsearch server logs have all sorts of dubious warnings and errors. I am using a PV mounted as "elasticsearch-data". The Elasticsearch logs are attached.
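
(The warnings and errors can be pulled straight from the pod with something like the following; the pod name is taken from the node name shown earlier.)

$ kubectl -n elk logs elasticsearch-server-es-default-0 | grep -iE "warn|error"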

But I can probe the health of the Elasticsearch server from the Kibana pod, and it comes back green:

bash-4.2$ curl --cert  /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key  -u "elk-elk-kibana-kibana-user:gLAX6mK445P2ng39eh80vJ1m" -k "https://elasticsearch-server-es-http.elk.svc:9200/_cluster/health?pretty"
{
  "cluster_name" : "elasticsearch-server",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 3,
  "active_shards" : 3,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Do you see any issues?

Does anyone have any idea about this? I have tried deleting all the indices, and I have tried multiple times.
