Elasticsearch on Kubernetes: cluster_uuid is _na_

I have deployed Elasticsearch in GKE. After manually scaling the cluster, I get a 503 error when requesting the index count.

I need to recover the data stored on the persistent disk.

I am unable to find the exact cause of this. Please also suggest how to recover the data from the disk and how to prevent this error in the future.

Please find the deployment YAML file below.

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.9.1
  nodeSets:
    - name: default
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.ml: true
      podTemplate:
        metadata:
          labels:
            app: elasticsearch
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
            - name: install-plugin
              command:
                - sh
                - -c
                - |
                  bin/elasticsearch-plugin install --batch repository-gcs
            - name: add-gcs-key
              command:
                - sh
                - -c
                - |
                  echo y | bin/elasticsearch-keystore add-file gcs.client.default.credentials_file ./key/gcs_backup_key.json
              volumeMounts:
                - name: gcs-backup-key
                  mountPath: "/usr/share/elasticsearch/key"
                  readOnly: true
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: 2Gi
                limits:
                  memory: 2Gi
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms1g -Xmx1g -XX:-HeapDumpOnOutOfMemoryError"
          volumes:
            - name: gcs-backup-key
              secret:
                secretName: gcs-backup-key
      count: 1
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: ssd
            resources:
              requests:
                storage: 10Gi
  http:
    service:
      spec:
        type: NodePort
    tls:
      selfSignedCertificate:
        disabled: true
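
As an aside: with ECK, scaling is normally done by changing `count` in the nodeSet of the Elasticsearch resource and re-applying it, so the operator can orchestrate the change safely, rather than by editing the underlying StatefulSet or deleting Pods by hand. A minimal sketch of the relevant fragment, assuming the manifest above (the `count: 3` value is illustrative):

```yaml
# Hypothetical fragment: scale by editing the nodeSet count in the
# Elasticsearch resource, then `kubectl apply` it again.
spec:
  nodeSets:
    - name: default
      count: 3   # change this value; do not scale the StatefulSet directly
```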

Can you provide the logs of the Elasticsearch Pods so we can better understand what's going on? (kubectl logs <pod-name>)
Also, can you tell us which Pods are currently running for that cluster? (kubectl get pods)

The data should normally be stored in the PersistentVolume bound to these Pods.

@sebgl Please find the Pod logs below.

{"type": "server", "timestamp": "2020-11-18T11:29:52,839Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "development", "node.name": "development-es-default-0", "message": "path: /_cluster/health, params: {}",
"stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
"at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:230) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:335) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:601) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.0.jar:7.10.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
{"type": "server", "timestamp": "2020-11-18T11:30:01,701Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:11,702Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:21,702Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:31,703Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:41,704Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:51,704Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:30:52,857Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "development", "node.name": "development-es-default-0", "message": "path: /_cluster/health, params: {}",
"stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
"at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:230) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:335) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:601) [elasticsearch-7.10.0.jar:7.10.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.0.jar:7.10.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
{"type": "server", "timestamp": "2020-11-18T11:31:01,705Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
{"type": "server", "timestamp": "2020-11-18T11:31:11,706Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "development", "node.name": "development-es-default-0", "message": "master not discovered or elected yet, an election requires at least 3 nodes with ids from [pcf6W9pOTOyi5QnFFrD0mw, tbtjw3MQQ0CE4QeMV9OXrw, fu4V8Sx6QAenYs5vjijZJg, PCeKaYBXQz-Jgjvnb1kzoQ, RSUlAxWpQYW_L0FFRnjmYg], have discovered [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, 10.12.9.3:9300] from hosts providers and [{development-es-default-0}{RSUlAxWpQYW_L0FFRnjmYg}{7fa72an3TcqItCZ_fwxmxg}{10.12.7.205}{10.12.7.205:9300}{cdhilmrstw}{ml.machine_memory=2147483648, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}] from last-known cluster state; node term 29, last-accepted version 926105 in term 29" }
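
For what it's worth, the warning above spells out the failure mode: the last-accepted cluster state lists five master-eligible node IDs, so an election needs a majority of them, but only one node has been discovered. The "at least 3 nodes" figure is the standard quorum formula; a minimal sketch of the arithmetic (the function name is mine, not an Elasticsearch API):

```python
def quorum(master_eligible_nodes: int) -> int:
    """Smallest majority of master-eligible nodes needed to win an election."""
    return master_eligible_nodes // 2 + 1

# The log lists 5 known master-eligible node IDs but only 1 discovered node:
print(quorum(5))       # -> 3, matching "an election requires at least 3 nodes"
print(1 >= quorum(5))  # -> False: one node alone cannot form a quorum
```

This is consistent with a cluster that was scaled down from several master-eligible nodes to one: the surviving node still remembers the old voting configuration and refuses to elect itself, which is why the cluster reports `cluster_uuid` as `_na_` and health requests time out with 503.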