Kibana fails to run in fresh deployment

Hello,

I’m trying to create an ECK instance using the official documentation.

I’m a bit confused, because each time I try to create the Kibana instance I get the same error saying the indices already exist.
This is after a fresh install, with PVs that are not retained.

I’ve additionally tried to delete the operator and re-apply it.

I had a working version, but wanted to recreate it to test the new “fleet” features (since I only have a basic license to play with, I assume the required xpack settings are not sufficient).

Elasticsearch yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elasticsearch
spec:
  version: 7.9.3
  nodeSets:
  - name: eck
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        storageClassName: rook-ceph-block
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi

Kibana yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elasticsearch
  namespace: elasticsearch
spec:
  version: 7.9.3
  count: 1
  elasticsearchRef:
    name: elasticsearch

Logs from Kibana

”warning","savedobjects-service"],"pid":7,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana."}

Hi @Samimb, thanks for your question.

This is unexpected, but it does look to me like there is some existing data that is being picked up.

Two things to try that come to mind:

  1. Can you deploy Elasticsearch under a different name? This will make the Pods and PVCs have different names as well, which might help if old PVs are somehow being reused.
  2. Can you try deploying Elasticsearch with emptyDir (docs)? While not recommended for production use, this will let you rule out storage issues completely; see the sketch below for what that could look like.
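A minimal sketch of the emptyDir variant, assuming the rest of your spec stays the same (the name here is just an example, which also covers point 1; the data volume is overridden through the podTemplate, as in the ECK volume claim templates docs):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-emptydir   # example name, different from the previous deployment
  namespace: elasticsearch
spec:
  version: 7.9.3
  nodeSets:
  - name: eck
    count: 1
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        volumes:
        # replaces the elasticsearch-data PVC with ephemeral storage
        - name: elasticsearch-data
          emptyDir: {}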

Hi David,

I just tried renaming it all based on your comments. I did not, however, remove the Ceph storage.
By renaming and also putting it in a different namespace, the PVCs get recreated.
My PVs have Delete as their reclaim policy, so there should not be anything left from earlier, especially with the renaming.
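To double-check that, this should be enough to confirm the reclaim policy and that no Released volumes are hanging around (plain kubectl, nothing ECK-specific):

# RECLAIM POLICY should read Delete and STATUS should not show any Released volumes
kubectl get pv
kubectl get pvc -n eck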

{"type":"log","@timestamp":"2020-10-28T08:11:46Z","tags":["info","savedobjects-service"],"pid":6,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2020-10-28T08:11:47Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_task_manager_1."}
{"type":"log","@timestamp":"2020-10-28T08:11:48Z","tags":["info","savedobjects-service"],"pid":6,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2020-10-28T08:12:11Z","tags":["info","savedobjects-service"],"pid":6,"message":"Pointing alias .kibana_task_manager to .kibana_task_manager_1."}
{"type":"log","@timestamp":"2020-10-28T08:12:18Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2020-10-28T08:12:21Z","tags":["info","savedobjects-service"],"pid":6,"message":"Finished in 33229ms."}
{"type":"log","@timestamp":"2020-10-28T08:12:27Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/HVcHOMvmQzCo972cK3aM7Q] already exists, with { index_uuid=\"HVcHOMvmQzCo972cK3aM7Q\" & index=\".kibana_1\" }"}
{"type":"log","@timestamp":"2020-10-28T08:12:27Z","tags":["warning","savedobjects-service"],"pid":6,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

This is the complete yaml with the renaming:

apiVersion: v1
kind: Namespace
metadata:
  name: eck
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: eck
  namespace: eck
spec:
  version: 7.9.3
  nodeSets:
  - name: hamb
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.enabled: true
      xpack.security.authc.api_key.enabled: true
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        storageClassName: rook-ceph-block
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: eck
  namespace: eck
spec:
  version: 7.9.3
  count: 1
  config:
    xpack.security.enabled: true
    xpack.ingestManager.fleet.tlsCheckDisabled: true
    xpack.encryptedSavedObjects.encryptionKey: "removed"
  elasticsearchRef:
    name: eck

OK, I tried without the Ceph storage and it now comes up. Strange. I'll dig a bit deeper into this.