Querying in Kibana Discover times out

Guys, I have been trying to deploy an ELK cluster on Azure AKS using the ELK Operator method from here.
However, after deploying ELK, accessing Kibana, and setting up the Filebeat index, I am unable to query anything in the Discover tab.
I can see that the nodes used to deploy this ELK cluster are hardly being utilized in terms of CPU and memory.
I have not made any custom changes to the deployment; the only change I made was adding our own CA certificate while deploying Kibana.
I am clueless as to where the issue could be that causes these timeouts.

Error in the Kibana dashboard:

Health status "unknown" for the Elasticsearch pod
Error from elasticsearch log:
{"type": "server", "timestamp": "2020-11-26T18:47:14,483Z", "level": "ERROR", "component": "o.e.c.a.s.ShardStateAction", "cluster.name": "quickstart", "node.name": "quickstart-es-default-0", "message": "[filebeat-7.10.0-2020.11.26-000001][0] unexpected failure while failing shard [shard id [[filebeat-7.10.0-2020.11.26-000001][0]], allocation id [dGatSxahTQ2p1vdcEpS6Xg], primary term [0], message [shard failure, reason [lucene commit failed]], failure [IOException[No space left on device]], markAsStale [true]]", "cluster.uuid": "ibUNaDnCR_-jbjxNyqCflQ", "node.id": "FFqCPSzeQU2w1X_DeHiU9w" ,
"stacktrace": ["org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed",
"at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1431) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1354) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Publication.onPossibleCommitFailure(Publication.java:173) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Publication.access$500(Publication.java:42) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Publication$PublicationTarget$PublishResponseHandler.onFailure(Publication.java:369) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.Coordinator$5.onFailure(Coordinator.java:1120) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.PublicationTransportHandler$2$1.onFailure(PublicationTransportHandler.java:206) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cluster.coordination.PublicationTransportHandler.lambda$sendClusterStateToNode$6(PublicationTransportHandler.java:273) ~[elasticsearch-7.7.0.jar:7.7.0]",

{"type": "server", "timestamp": "2020-11-26T18:48:53,100Z", "level": "ERROR", "component": "o.e.t.TransportService", "cluster.name": "quickstart", "node.name": "quickstart-es-default-0", "message": "failed to handle exception for action [indices:data/read/search[phase/query]], handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.action.search.SearchTransportService$ConnectionCountingHandler@18e3b2fc/org.elasticsearch.action.search.SearchExecutionStatsCollector@7dd0f04e]", "cluster.uuid": "ibUNaDnCR_-jbjxNyqCflQ", "node.id": "FFqCPSzeQU2w1X_DeHiU9w" ,

It seems to have something to do with "o.e.t.TransportService". Can you please throw some light on this, guys?

This appears to be an error originating from Elasticsearch, not Kibana. I'll dig a bit and see if anyone knows what might be at issue here.

This error may indicate the problem. Can you see what resources remain on your instance? Perhaps you've run out of disk space.
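
For what it's worth, a quick way to check disk usage from the Elasticsearch side is the _cat/allocation API. A rough sketch, with the endpoint and password left as placeholders for your own values:

# disk.used / disk.avail per node would reveal a full data path
curl -k -u elastic:<password> "https://<elasticsearch-endpoint>:9200/_cat/allocation?v"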

I am so sorry for the delay in responding, Clint. I am using a 120GB Azure managed disk which is not even 20% utilized. However, during deployment I did not specify any PV, PVC, or SC and used the defaults provided in the documentation.

Is there something that I am missing? Please help :pray:

Ganesh,

To get more clarity, I'd like to know

  • whether you are using this path to ship your data: Filebeat -> ES in AKS
  • whether you can access the cluster at the designated endpoint with the username/password

Also, please check if the filebeat.yml contains elasticsearch username/password!
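
For the second point, with the default ECK quickstart the elastic user's password is stored in a Kubernetes secret, so a connectivity check along these lines should work (a sketch that assumes the quickstart resource names used elsewhere in this thread):

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
# forward the HTTPS service locally (run in a separate terminal)
kubectl port-forward service/quickstart-es-http 9200
curl -k -u "elastic:$PASSWORD" "https://localhost:9200/_cluster/health?pretty"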

--
Aravind


Hi Ganesh! Looking at this again, there has got to be something going on with the container such that Lucene believes it's out of disk space. I reached out to some other Elasticians, and there doesn't seem to be any gotcha or hidden error here.

Did you deviate at all from the blog post instructions? That might give us a clue... you might try going through the post step by step again, noting whether there are any errors?

Some folks are still looking at this, so you may get another reply... but that's what I would recommend in the immediate term!

I would start with the persistent volume. What's the output of kubectl get pvc and then kubectl describe pvc ... (replace the dots with the right PVC from the previous command).
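
Something along these lines, for example; the claim name is a placeholder to be filled from the first command's output, and the exec shows what the data path inside the pod actually sees:

kubectl get pvc
kubectl describe pvc <claim-name>
# check the size actually mounted at the Elasticsearch data path
kubectl exec quickstart-es-default-0 -- df -h /usr/share/elasticsearch/data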


Thank you, sir, for pointing me in the correct direction. I performed a new deployment and found that it created a 1GB PV by default. I have increased the size to 100GB and it seems to have solved the issue; for now I am testing a bit more to confirm it is not timing out.

Guys, I have increased the default size of the PV and PVC from 1GB to 100GB:
kubectl get pvc
pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82 100Gi RWO Delete Bound default/elasticsearch-data-quickstart-es-default-0 default 7d
kubectl get pv
elasticsearch-data-quickstart-es-default-0 Bound pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82 100Gi RWO default 7d
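
(For reference, a resize like this is typically requested by patching the claim, roughly as sketched below; whether the filesystem inside the pod actually grows afterwards depends on the StorageClass allowing volume expansion.)

kubectl patch pvc elasticsearch-data-quickstart-es-default-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'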

However, the Elasticsearch container (/usr/share/elasticsearch/data) sees only 1GB and it's completely full:
Filesystem Size Used Avail Use% Mounted on
overlay 124G 23G 102G 19% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 124G 23G 102G 19% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 3.9G 4.0K 3.9G 1% /mnt/elastic-internal/elasticsearch-config
tmpfs 3.9G 4.0K 3.9G 1% /mnt/elastic-internal/downward-api
tmpfs 3.9G 12K 3.9G 1% /mnt/elastic-internal/xpack-file-realm
tmpfs 3.9G 4.0K 3.9G 1% /mnt/elastic-internal/probe-user
/dev/sdc 976M 960M 0 100% /usr/share/elasticsearch/data
tmpfs 3.9G 0 3.9G 0% /usr/share/elasticsearch/config/transport-remote-certs
tmpfs 3.9G 12K 3.9G 1% /usr/share/elasticsearch/config/transport-certs
tmpfs 3.9G 12K 3.9G 1% /usr/share/elasticsearch/config/http-certs
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware

Please advise guys?

Hi @aravindputrevu, apologies sir, I missed your comment.
I can log in to the Elastic cluster using the browser, and curl against localhost is also working fine.
I have used the default Filebeat configuration for AKS, which is deployed as a DaemonSet on the cluster. Inside the Filebeat configuration file I have provided the Elastic username and password.
I am using the default path setting for shipping the data and haven't customized it.

Can you share your current Kubernetes manifest and the output of kubectl describe pvc ..., please? Did you try to increase the PVC with apply or did you start with a new cluster?

PS: Please format your code for better readability.
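
As a side note, resizing a PVC in place only takes effect if the StorageClass has allowVolumeExpansion enabled. A quick, generic way to check:

kubectl get storageclass -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion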

Thank you @xeraa for the quick reply, here is what I found:

kubectl describe pv pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82

 Name:              pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82
    Labels:            failure-domain.beta.kubernetes.io/region=westeurope
    Annotations:       pv.kubernetes.io/bound-by-controller: yes
                       pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
                       volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
    Finalizers:        [kubernetes.io/pv-protection]
    StorageClass:      default
    Status:            Bound
    Claim:             default/elasticsearch-data-quickstart-es-default-0
    Reclaim Policy:    Delete
    Access Modes:      RWO
    VolumeMode:        Filesystem
    Capacity:          100Gi
    Node Affinity:
      Required Terms:
        Term 0:        failure-domain.beta.kubernetes.io/region in [westeurope]
    Message:
    Source:
        Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
        DiskName:     kubernetes-dynamic-pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82
        DiskURI:      /subscriptions/6ab16802-ae55-4ec9-8fd7-61d9afe8ad3d/resourceGroups/mc_rg-hcfg_eu-hcfg-stg_westeurope/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82
        Kind:         Managed
        FSType:
        CachingMode:  ReadOnly
        ReadOnly:     false
    Events:           <none> 

kubectl describe pvc elasticsearch-data-quickstart-es-default-0

Name:          elasticsearch-data-quickstart-es-default-0
Namespace:     default
StorageClass:  default
Status:        Bound
Volume:        pvc-cdbe36a8-f266-42a2-aea3-62dc3c43ee82
Labels:        common.k8s.elastic.co/type=elasticsearch
               elasticsearch.k8s.elastic.co/cluster-name=quickstart
               elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
               volume.kubernetes.io/storage-resizer: kubernetes.io/azure-disk
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    quickstart-es-default-0
Events:        <none>

However, the Elasticsearch StatefulSet deployed by the Elastic operator still shows 1GB, and I am unable to make changes as it is a StatefulSet.
kubectl describe statefulset.apps/quickstart-es-default

Name:               quickstart-es-default
Namespace:          default
CreationTimestamp:  Thu, 03 Dec 2020 17:35:08 +0000
Selector:           common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=quickstart,elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Labels:             common.k8s.elastic.co/template-hash=1623012113
                    common.k8s.elastic.co/type=elasticsearch
                    elasticsearch.k8s.elastic.co/cluster-name=quickstart
                    elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    OnDelete
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       common.k8s.elastic.co/type=elasticsearch
                elasticsearch.k8s.elastic.co/cluster-name=quickstart
                elasticsearch.k8s.elastic.co/config-hash=178912897
                elasticsearch.k8s.elastic.co/http-scheme=https
                elasticsearch.k8s.elastic.co/node-data=true
                elasticsearch.k8s.elastic.co/node-ingest=true
                elasticsearch.k8s.elastic.co/node-master=true
                elasticsearch.k8s.elastic.co/node-ml=true
                elasticsearch.k8s.elastic.co/statefulset-name=quickstart-es-default
                elasticsearch.k8s.elastic.co/version=7.7.0
  Annotations:  co.elastic.logs/module: elasticsearch
  Init Containers:
   elastic-internal-init-filesystem:
    Image:      docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
      -c
      /mnt/elastic-internal/scripts/prepare-fs.sh
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:     (v1:status.podIP)
      POD_NAME:   (v1:metadata.name)
      POD_IP:     (v1:status.podIP)
      POD_NAME:   (v1:metadata.name)
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-bin-local from elastic-internal-elasticsearch-bin-local (rw)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/elasticsearch-config-local from elastic-internal-elasticsearch-config-local (rw)
      /mnt/elastic-internal/elasticsearch-plugins-local from elastic-internal-elasticsearch-plugins-local (rw)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/transport-certificates from elastic-internal-transport-certificates (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
  Containers:
   elasticsearch:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      memory:  2Gi
    Requests:
      memory:   2Gi
    Readiness:  exec [bash -c /mnt/elastic-internal/scripts/readiness-probe-script.sh] delay=10s timeout=5s period=5s #success=1 #failure=3
    Environment:
      POD_IP:                     (v1:status.podIP)
      POD_NAME:                   (v1:metadata.name)
      PROBE_PASSWORD_PATH:       /mnt/elastic-internal/probe-user/elastic-internal-probe
      PROBE_USERNAME:            elastic-internal-probe
      READINESS_PROBE_PROTOCOL:  https
      HEADLESS_SERVICE_NAME:     quickstart-es-default
      NSS_SDB_USE_CACHE:         no
    Mounts:
      /mnt/elastic-internal/downward-api from downward-api (ro)
      /mnt/elastic-internal/elasticsearch-config from elastic-internal-elasticsearch-config (ro)
      /mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
      /mnt/elastic-internal/scripts from elastic-internal-scripts (ro)
      /mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
      /mnt/elastic-internal/xpack-file-realm from elastic-internal-xpack-file-realm (ro)
      /usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
      /usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
      /usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
      /usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
      /usr/share/elasticsearch/config/transport-remote-certs/ from elastic-internal-remote-certificate-authorities (ro)
      /usr/share/elasticsearch/data from elasticsearch-data (rw)
      /usr/share/elasticsearch/logs from elasticsearch-logs (rw)
      /usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
  Volumes:
   downward-api:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
   elastic-internal-elasticsearch-bin-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   elastic-internal-elasticsearch-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-default-es-config
    Optional:    false
   elastic-internal-elasticsearch-config-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   elastic-internal-elasticsearch-plugins-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   elastic-internal-http-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-http-certs-internal
    Optional:    false
   elastic-internal-probe-user:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-internal-users
    Optional:    false
   elastic-internal-remote-certificate-authorities:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-remote-ca
    Optional:    false
   elastic-internal-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickstart-es-scripts
    Optional:  false
   elastic-internal-transport-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-transport-certificates
    Optional:    false
   elastic-internal-unicast-hosts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      quickstart-es-unicast-hosts
    Optional:  false
   elastic-internal-xpack-file-realm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quickstart-es-xpack-file-realm
    Optional:    false
   elasticsearch-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  claim-name-placeholder
    ReadOnly:   false
   elasticsearch-logs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Volume Claims:
  Name:          elasticsearch-data
  StorageClass:
  Labels:        <none>
  Annotations:   <none>
  Capacity:      1Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  58m   statefulset-controller  create Pod quickstart-es-default-0 in StatefulSet quickstart-es-default

Please advise :pray:

Please find attached the YAML for Elasticsearch, Kibana, and Filebeat.
elastic.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1 
kind: Elasticsearch 
metadata: 
  name: quickstart 
spec: 
  version: 7.7.0 #Make sure you use the version of your choice 
  http: 
    service: 
      spec: 
        type: LoadBalancer #Adds an external IP 
  nodeSets: 
  - name: default 
    count: 1 
    config: 
      node.master: true 
      node.data: true 
      node.ingest: true 
      node.store.allow_mmap: false

kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1 
kind: Kibana 
metadata: 
  name: quickstart 
spec: 
  version: 7.7.0 #Make sure Kibana and Elasticsearch are on the same version. 
  http: 
    service: 
      spec: 
        type: LoadBalancer #Adds an external IP 
    tls:
      certificate:
        secretName: elk-secret
  count: 1 
  elasticsearchRef: 
    name: quickstart

filebeat.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: default
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /etc/certificate/ca.crt
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.10.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: https://quickstart-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: a5HLF5Z84P77QIkt20ikR31v
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: certs
          mountPath: /etc/certificate/ca.crt
          readOnly: true
          subPath: ca.crt
      volumes:
      - name: certs
        secret:
          secretName: quickstart-es-http-certs-public
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default
  labels:
    k8s-app: filebeat
---

@Ganesh_Sharma

Please try adding the section regarding volumeClaimTemplates into your configs and see if that solves the problem.

      nodeSets:
      - name: default 
        count: 1 
        config: 
          node.master: true 
          node.data: true 
          node.ingest: true 
          node.store.allow_mmap: false
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: standard
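
Merged into the elastic.yaml shared earlier in the thread, the manifest would look roughly like this (a sketch; on AKS you would typically point storageClassName at an existing class such as default or managed-premium rather than standard):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  http:
    service:
      spec:
        type: LoadBalancer
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: default   # use a StorageClass that exists in your cluster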


Thank you @Jeff_L, I have added a storage class and tested the cluster for three days, and it seems to have resolved the issue. Thank you for the quick help!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.