Initializing Elasticsearch DB

OpenStack VMs running 'Red Hat Enterprise Linux Server release 7.7 (Maipo)'
elasticsearch-oss:6.8.2
Kibana was the first clue things were bad.

This is a lab environment. I would be happy to initialize the database and lose all data if that were possible.
Are there any such initialization options?
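
To be clear, by "initialize" I mean something like dropping every index and letting them be recreated. A rough sketch of what I had in mind (the host is a placeholder for whatever answers on 9200 in my deployment, and it assumes destructive wildcard deletes are allowed):

    # Hypothetical reset: delete every index via the client service.
    curl -X DELETE 'http://<es-client-host>:9200/_all'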

Here is the history/issue:

My master and data nodes are no longer starting. I had a power outage, which resulted in an fsck error. I successfully ran fsck manually on /dev/vdX.
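
For reference, this is roughly what I ran (a sketch; the device is whatever the kubelet event points at on the affected node, /dev/vdb in the events below):

    # On the node that owns the volume (vcne-k8s-node-4 here), with the
    # filesystem unmounted. Run fsck interactively, i.e. without -a or -p,
    # as the kubelet message asks.
    umount /dev/vdb        # only if it is currently mounted
    fsck -f /dev/vdb
    # or, for ext4 specifically:
    # e2fsck -f /dev/vdb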

Current errors (note: I have other masters):

NAME                                                        READY   STATUS             RESTARTS
elastic-elasticsearch-client-6458757cf5-x4m47               0/1     Running            0
elastic-elasticsearch-data-0                                1/1     Running            0
elastic-elasticsearch-data-1                                0/1     CrashLoopBackOff   6
elastic-elasticsearch-master-0                              0/1     CrashLoopBackOff   266
elastic-exporter-elasticsearch-exporter-6c8c8c8854-n2f4r    1/1     Running            0
logs-fluentd-elasticsearch-4g5n4                            1/1     Running            0
logs-fluentd-elasticsearch-8hdtt                            1/1     Running


    **DATA-1 NODE EVENTS:**  
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      data:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  data-occne-elastic-elasticsearch-data-1
        ReadOnly:   false
      config:
        Type:      ConfigMap (a volume populated by a ConfigMap)
        Name:      occne-elastic-elasticsearch
        Optional:  false
      occne-elastic-elasticsearch-data-token-bjlht:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  occne-elastic-elasticsearch-data-token-bjlht
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s

    Events:
      Type     Reason       Age                   From                      Message
      ----     ------       ----                  ----                      -------
      Warning  FailedMount  47m (x1272 over 44h)  kubelet, vcne-k8s-node-4  MountVolume.MountDevice failed for volume "pvc-ff0ede3e-d3a0-4476-87a2-2ecd6ef4fe2f" : 'fsck' found errors on device /dev/disk/by-id/virtio-9242dfe6-ee8d-42b4-9 but could not correct them: fsck from util-linux 2.23.2
    /dev/vdb contains a file system with errors, check forced.
    /dev/vdb: Directory inode 1573992, block #0, offset 0: directory corrupted


    /dev/vdb: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
      (i.e., without -a or -p options)
    .
      Warning  FailedMount  37m (x1154 over 44h)   kubelet, vcne-k8s-node-4  Unable to mount volumes for pod "occne-elastic-elasticsearch-data-1_occne-infra(501dcb98-cd72-44b6-9ffc-e553d65ad98a)": timeout expired waiting for volumes to attach or mount for pod "occne-infra"/"occne-elastic-elasticsearch-data-1". list of unmounted volumes=[data]. list of unattached volumes=[data config occne-elastic-elasticsearch-data-token-bjlht]
      Warning  BackOff      3m20s (x152 over 37m)  kubelet, vcne-k8s-node-4  Back-off restarting failed container


    **MASTER EVENTS:**
        Conditions:
          Type              Status
          Initialized       True 
          Ready             False 
          ContainersReady   False 
          PodScheduled      True 
        Volumes:
          data:
            Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
            ClaimName:  data-occne-elastic-elasticsearch-master-0
            ReadOnly:   false
          config:
            Type:      ConfigMap (a volume populated by a ConfigMap)
            Name:      occne-elastic-elasticsearch
            Optional:  false
          occne-elastic-elasticsearch-master-token-5twdw:
            Type:        Secret (a volume populated by a Secret)
            SecretName:  occne-elastic-elasticsearch-master-token-5twdw
            Optional:    false
        QoS Class:       Burstable
        Node-Selectors:  <none>
        Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                         node.kubernetes.io/unreachable:NoExecute for 300s
        Events:
          Type     Reason   Age                         From                      Message
          ----     ------   ----                        ----                      -------
          Warning  BackOff  <invalid> (x6440 over 23h)  kubelet, vcne-k8s-node-1  Back-off restarting failed container

Any help appreciated.
Paul

@Paul_Stevens After you perform fsck, could you try restarting the pods manually?
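
Something along these lines, assuming the pod names and namespace from your describe output; deleting the pods just lets the StatefulSet controller recreate them and retry the mount:

    kubectl delete pod occne-elastic-elasticsearch-master-0 -n occne-infra
    kubectl delete pod occne-elastic-elasticsearch-data-1 -n occne-infra
    # watch them come back up
    kubectl get pods -n occne-infra -w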

I did restart the pods manually. Somehow, through the sequence of running fsck(s) and restarting master-0, I got all 3 masters running. Then I reset the data-X pods and now I am running again. Learning the process...
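
For anyone following along, I verified recovery with something like this (host/port are placeholders for my client service):

    kubectl get pods -n occne-infra
    curl -s 'http://<es-client-host>:9200/_cluster/health?pretty'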
Thanks for the update; we can close this.

