Metricbeat data flow problem with Kibana

Hi!
I recently deployed Elasticsearch with Kibana on an Azure Kubernetes (AKS) environment. After that I deployed Metricbeat, and it seems fine: all pods are running without problems.

But I cannot see any data flowing from Metricbeat to Kibana. When I open the page below, Metricbeat's status is shown as Offline. After clicking "Monitor with Metricbeat", I tried lots of different configs (including localhost:9200), but none of them worked.

"ElasticSearch Home Page" --> "Monitor the stack" --> "Setup Monitoring with Metricbeat"

Metricbeat was deployed from here: https://github.com/elastic/beats/blob/master/deploy/kubernetes/metricbeat-kubernetes.yaml

Pods and services are shown below:

>     PS C:\Users\user\Desktop\ES_YAML\2> kubectl get pods --all-namespaces -o wide
>     NAMESPACE        NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                          NOMINATED NODE   READINESS GATES
>     default          quickstart-es-default-0               1/1     Running   0          37h     10.240.0.30   aks-aks-35064888-vmss000000   <none>           <none>
>     default          quickstart-kb-b9f8565fc-xfg2n         1/1     Running   0          37h     10.240.0.11   aks-aks-35064888-vmss000000   <none>           <none>
>     elastic-system   elastic-operator-0                    1/1     Running   0          37h     10.240.0.44   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      azure-cni-networkmonitor-l7xrl        1/1     Running   0          2d11h   10.240.0.4    aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      azure-cni-networkmonitor-p7g57        1/1     Running   0          2d11h   10.240.0.35   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      azure-ip-masq-agent-5kw7q             1/1     Running   0          18d     10.240.0.4    aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      azure-ip-masq-agent-xr778             1/1     Running   0          18d     10.240.0.35   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      coredns-autoscaler-5b6cbd75d7-dk9kd   1/1     Running   0          2d11h   10.240.0.49   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      coredns-b94d8b788-8vrks               1/1     Running   0          2d11h   10.240.0.31   aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      coredns-b94d8b788-m5r8w               1/1     Running   0          2d11h   10.240.0.36   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      kube-proxy-cgprj                      1/1     Running   0          47d     10.240.0.35   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      kube-proxy-j2wn7                      1/1     Running   0          47d     10.240.0.4    aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      metricbeat-49jms                      1/1     Running   0          3m23s   10.240.0.4    aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      metricbeat-l97v9                      1/1     Running   0          3m23s   10.240.0.35   aks-aks-35064888-vmss000001   <none>           <none>
>     kube-system      metrics-server-77c8679d7d-tp229       1/1     Running   0          2d11h   10.240.0.12   aks-aks-35064888-vmss000000   <none>           <none>
>     kube-system      tunnelfront-845c87df46-rlbxk          1/1     Running   0          2d11h   10.240.0.19   aks-aks-35064888-vmss000000   <none>           <none>
>     PS C:\Users\user\Desktop\ES_YAML\2>
> 
>     PS C:\Users\user\Desktop\ES_YAML\2> kubectl get services --all-namespaces
>     NAMESPACE        NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)         AGE
>     default          elasticsearch-cluster        ClusterIP      None           <none>          9300/TCP        11d
>     default          elasticsearch-loadbalancer   LoadBalancer   10.0.24.232    xx.xx.xxx.150   80:32177/TCP    11d
>     default          kubernetes                   ClusterIP      10.0.0.1       <none>          443/TCP         47d
>     default          quickstart-es-default        ClusterIP      None           <none>          9200/TCP        37h
>     default          quickstart-es-http           ClusterIP      10.0.180.93    <none>          9200/TCP        37h
>     default          quickstart-es-transport      ClusterIP      None           <none>          9300/TCP        37h
>     default          quickstart-kb-http           ClusterIP      10.0.156.74    <none>          5601/TCP        37h
>     elastic-system   elastic-webhook-server       ClusterIP      10.0.233.68    <none>          443/TCP         37h
>     kube-system      kube-dns                     ClusterIP      10.0.0.10      <none>          53/UDP,53/TCP   47d
>     kube-system      metrics-server               ClusterIP      10.0.183.253   <none>          443/TCP         47d
>     PS C:\Users\user\Desktop\ES_YAML\2>

How can I solve this issue?
Thanks!

In your metricbeat-daemonset-modules ConfigMap, there are two modules defined: system and kubernetes. Similarly you will need to define the elasticsearch module. Remember to set xpack.enabled: true in this module's configuration!
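As a minimal sketch of what that could look like, an `elasticsearch.yml` entry could be added alongside the existing `kubernetes.yml` and `system.yml` keys in the ConfigMap. The `hosts` value below points at the `quickstart-es-http` service from the output above; the username, password, and SSL settings are assumptions you would need to adjust for your cluster:

```yaml
  elasticsearch.yml: |-
    - module: elasticsearch
      xpack.enabled: true          # ship monitoring data in the format the Stack Monitoring UI expects
      period: 10s
      # ECK's HTTP service for the "quickstart" cluster, per the service list above (assumption):
      hosts: ["https://quickstart-es-http.default.svc:9200"]
      username: "elastic"
      password: "<elastic user password>"   # placeholder; do not commit real credentials
      ssl.verification_mode: "none"         # or point ssl.certificate_authorities at the ECK CA instead
```

With `xpack.enabled: true`, Metricbeat collects all monitoring metricsets for the module, so you don't list `metricsets` explicitly.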

Shaunak

Hi Shaunak,
I'm new to ES and Kubernetes, so I'm not sure how to do it.
The whole "metricbeat-daemonset-modules" file is below (output of `kubectl edit configmap metricbeat-daemonset-modules`).

Where should I paste "xpack.enabled: true"?

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,
      # remove ssl.verification_mode entry and use the CA, for instance:
      #ssl.certificate_authorities:
        #- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    # Currently `proxy` metricset is not supported on Openshift, comment out section
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"kubernetes.yml":"- module: kubernetes\n  metricsets:\n    - node\n    - system\n    - pod\n    - container\n    - volume\n  period: 10s\n  host: ${NODE_NAME}\n  hosts: [\"https://${NODE_NAME}:10250\"]\n  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token\n  ssl.verification_mode: \"none\"\n  # If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,\n  # remove ssl.verification_mode entry and use the CA, for instance:\n  #ssl.certificate_authorities:\n    #- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\n# Currently `proxy` metricset is not supported on Openshift, comment out section\n- module: kubernetes\n  metricsets:\n    - proxy\n  period: 10s\n  host: ${NODE_NAME}\n  hosts: [\"localhost:10249\"]","system.yml":"- module: system\n  period: 10s\n  metricsets:\n    - cpu\n    - load\n    - memory\n    - network\n    - process\n    - process_summary\n    #- core\n    #- diskio\n    #- socket\n  processes: ['.*']\n  process.include_top_n:\n    by_cpu: 5      # include top 5 processes by CPU\n    by_memory: 5   # include top 5 processes by memory\n- module: system\n  period: 1m\n  metricsets:\n    - filesystem\n    - fsstat\n  processors:\n  - drop_event.when.regexp:\n      system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"k8s-app":"metricbeat"},"name":"metricbeat-daemonset-modules","namespace":"kube-system"}}
  creationTimestamp: "2020-11-23T09:27:34Z"
  labels:
    k8s-app: metricbeat
  name: metricbeat-daemonset-modules
  namespace: kube-system
  resourceVersion: "9998918"
  selfLink: /api/v1/namespaces/kube-system/configmaps/metricbeat-daemonset-modules
  uid: 52436012-8c48-4392-b1da-46528d8fd385

Hi again,
Given my environment (the pods and services above), what should I put for the "HOST" and "PORT" variables in metricbeat.yaml?
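For reference, the stock manifest wires the output through `ELASTICSEARCH_HOST` and `ELASTICSEARCH_PORT` environment variables on the DaemonSet container. Given the `quickstart-es-http` service on port 9200 in the service list above, something like the following could work; the credential values are placeholders (ECK typically stores the `elastic` user's password in a secret named `quickstart-es-elastic-user`):

```yaml
        env:
        - name: ELASTICSEARCH_HOST
          value: quickstart-es-http.default.svc   # ECK HTTP service from the output above (assumption)
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "<password from the quickstart-es-elastic-user secret>"  # placeholder
```

Note that ECK serves Elasticsearch over HTTPS by default, so the `output.elasticsearch` section in metricbeat.yml would likely also need `protocol: https` and either `ssl.verification_mode: none` or the ECK CA configured.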
