I have brought up "elastic:7.6.2" and "kibana:7.6.2" (one instance each) using the latest Kubernetes operator, version 1.1.
I deployed Elasticsearch first, waited for the pod to turn green, and then waited a while longer to make sure the server was completely ready. After that I deployed Kibana and watched it turn green as well.
But when I hit the URL, it says "Kibana server is not ready yet"!
Please find the logs below:
{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["info","savedobjects-service"],"pid":6,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Could not create APM Agent configuration: Authentication Exception
{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack], with { header={ WWW-Authenticate={ 0=\"Bearer realm=\\\"security\\\"\" & 1=\"ApiKey\" & 2=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":[\\\"Bearer realm=\\\\\\\"security\\\\\\\"\\\",\\\"ApiKey\\\",\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"]}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elk-elk-kibana-kibana-user] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":[\\\"Bearer realm=\\\\\\\"security\\\\\\\"\\\",\\\"ApiKey\\\",\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"]}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Bearer realm=\\\"security\\\", ApiKey, Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
{"type":"log","@timestamp":"2020-05-06T18:05:38Z","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2020-05-06T18:06:08Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_task_manager_1."}
{"type":"log","@timestamp":"2020-05-06T18:06:38Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/nHePOgqSStCLaXQ2E8UWeA] already exists, with { index_uuid=\"nHePOgqSStCLaXQ2E8UWeA\" & index=\".kibana_1\" }"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/Q_xwN10qSgmSP5FOiy16cQ] already exists, with { index_uuid=\"Q_xwN10qSgmSP5FOiy16cQ\" & index=\".kibana_task_manager_1\" }"}
{"type":"log","@timestamp":"2020-05-06T18:06:42Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}
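For what it's worth, the warning above suggests its own workaround: delete the half-created saved-objects indices and restart Kibana so it re-runs the migration. A sketch of that, reusing the URL and credentials from the curl calls further down in this post (the kubectl label selector is an assumption about how the operator labels Kibana pods, not something I have verified):

```shell
#!/usr/bin/env bash
# Workaround hinted at by the "Another Kibana instance appears to be migrating
# the index" warning: drop the stuck indices, then restart Kibana.
ES_URL="https://elasticsearch-server-es-http.elk.svc:9200"
AUTH="elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0"

for idx in .kibana_1 .kibana_task_manager_1; do
  echo "DELETE ${ES_URL}/${idx}"
  curl -k --max-time 5 -u "$AUTH" -X DELETE "${ES_URL}/${idx}" \
    || echo "delete failed for ${idx} (cluster unreachable?)"
done

# Then restart the Kibana pod so it retries the migration
# (label selector below is assumed, check with `kubectl get pods --show-labels`):
echo "kubectl delete pod -n elk -l kibana.k8s.elastic.co/name=elk-kibana"
```

I have not tried this yet because I would first like to understand the 401 that precedes it.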
The config file, kibana.yml, from the Kibana pod looks fine:
bash-4.2$ cat /usr/share/kibana/config/kibana.yml
elasticsearch:
  hosts:
  - https://elasticsearch-server-es-http.elk.svc:9200
  password: 83C1k659VsGLIY4q1Ja0kKL0
  ssl:
    certificateAuthorities: /usr/share/kibana/config/elasticsearch-certs/ca.crt
    verificationMode: certificate
  username: elk-elk-kibana-kibana-user
server:
  host: "0"
  name: elk-kibana
  ssl:
    certificate: /mnt/elastic-internal/http-certs/tls.crt
    enabled: true
    key: /mnt/elastic-internal/http-certs/tls.key
xpack:
  license_management:
    ui:
      enabled: false
  monitoring:
    ui:
      container:
        elasticsearch:
          enabled: true
  security:
    encryptionKey: KxB0hlQBvaJhA4KRW1ele2MqzTQJeZQAAOHHs5whiWzVolXr6KBdJTq9kBMf6t81
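One thing I considered checking: whether the password baked into this kibana.yml still matches the one the operator keeps in its user secret, since a stale password would explain the 401 in the licensing log. A sketch, assuming the operator stores the credentials in a secret named after the user with the username as the data key (both assumptions, not verified):

```shell
#!/usr/bin/env bash
# Compare the operator-managed password against the one in kibana.yml.
# SECRET name and data key are assumptions about the operator's naming scheme.
SECRET="elk-elk-kibana-kibana-user"
kubectl get secret -n elk "$SECRET" \
  -o go-template='{{ index .data "elk-elk-kibana-kibana-user" | base64decode }}' \
  || echo "could not read secret $SECRET"
```

If that prints something other than 83C1k659VsGLIY4q1Ja0kKL0, the config and the cluster have drifted apart.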
And I can confirm that the Elasticsearch server is reachable from within the Kibana pod:
bash-4.2$ curl --cert /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key -u "elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0" -k "https://elasticsearch-server-es-http.elk.svc:9200/_xpack"; echo
{"build":{"hash":"ef48eb35cf30adf4db14086e8aabd07ef6fb113f","date":"2020-03-26T06:34:37.794943Z"},"license":{"uid":"6f88e1b6-bc9a-430c-8a3a-f7432f983c3c","type":"basic","mode":"basic","status":"active"},"features":{"analytics":{"available":true,"enabled":true},"ccr":{"available":false,"enabled":true},"enrich":{"available":true,"enabled":true},"flattened":{"available":true,"enabled":true},"frozen_indices":{"available":true,"enabled":true},"graph":{"available":false,"enabled":true},"ilm":{"available":true,"enabled":true},"logstash":{"available":false,"enabled":true},"ml":{"available":false,"enabled":true,"native_code_info":{"version":"7.6.2","build_hash":"e06ef9d86d5332"}},"monitoring":{"available":true,"enabled":true},"rollup":{"available":true,"enabled":true},"security":{"available":true,"enabled":true},"slm":{"available":true,"enabled":true},"spatial":{"available":true,"enabled":true},"sql":{"available":true,"enabled":true},"transform":{"available":true,"enabled":true},"vectors":{"available":true,"enabled":true},"voting_only":{"available":true,"enabled":true},"watcher":{"available":false,"enabled":true}},"tagline":"You know, for X"}
curl --cert /mnt/elastic-internal/http-certs/tls.crt --key /mnt/elastic-internal/http-certs/tls.key -u "elk-elk-kibana-kibana-user:83C1k659VsGLIY4q1Ja0kKL0" -k "https://elasticsearch-server-es-http.elk.svc:9200"
{
"name" : "elasticsearch-server-es-default-0",
"cluster_name" : "elasticsearch-server",
"cluster_uuid" : "a0Vj2ApYTg-CnbRefdHdqg",
"version" : {
"number" : "7.6.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
So I am wondering what is causing this. Do you see anything in the logs that could help troubleshoot the issue?