Metricbeat : Error fetching data for metricset elasticsearch

I have set up Metricbeat in Kubernetes with the following manifests:

DaemonSet
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    app: metricbeat
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  resources:
  - namespaces
  - pods
  - services
  - events
  - deployments
  - nodes
  - nodes/stats
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-policy
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  ilm-policy.json: |-
    {
    	"policy": {
    		"phases": {
    			"hot": {
    				"actions": {
    					"rollover": {
    						"max_age": "3d",
    						"max_size": "50gb"
    					},
    					"set_priority": {
    						"priority": 100
    					}
    				},
    				"min_age": "0ms"
    			},
    			"warm": {
    				"actions": {
    					"readonly": {},
    					"set_priority": {
    						"priority": 50
    					}
    				},
    				"min_age": "5d"
    			},
    			"cold": {
    				"actions": {
    					"freeze": {},
    					"set_priority": {
    						"priority": 0
    					}
    				},
    				"min_age": "6d"
    			},
    			"delete": {
    				"actions": {
    					"delete": {
    						"delete_searchable_snapshot": true
    					}
    				},
    				"min_age": "7d"
    			}
    		}
    	}
    }

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    #metricbeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # ====================== Index Lifecycle Management (ILM) ======================

    # Configure index lifecycle management (ILM). These settings create a write
    # alias and add additional settings to the index template. When ILM is enabled,
    # output.elasticsearch.index is ignored, and the write alias is used to set the
    # index name.

    # Enable ILM support. Valid values are true, false, and auto. When set to auto
    # (the default), the Beat uses index lifecycle management when it connects to a
    # cluster that supports ILM; otherwise, it creates daily indices.
    setup.ilm.enabled: true

    # Set the prefix used in the index lifecycle write alias name. The default alias
    # name is 'metricbeat-%{[agent.version]}'.
    # setup.ilm.rollover_alias: 'metricbeat'

    # Set the rollover index pattern. The default is "%{now/d}-000001".
    setup.ilm.pattern: "{now/d}-000001"

    # Set the lifecycle policy name. The default policy name is
    # 'beatname'.
    setup.ilm.policy_name: "metricbeat-rollover-7-days"

    # The path to a JSON file that contains a lifecycle policy configuration. Used
    # to load your own lifecycle policy.
    setup.ilm.policy_file: /usr/share/metricbeat/policy/ilm-policy.json

    # Disable the check for an existing lifecycle policy. The default is true. If
    # you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
    # can be installed.
    setup.ilm.check_exists: true

    # Overwrite the lifecycle policy at startup. The default is false.
    setup.ilm.overwrite: true


    # ================================== Logging ===================================

    # There are four options for the log output: file, stderr, syslog, eventlog
    # The file output is the default.

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    logging.level: info

    # Enable debug output for selected components. To enable all selectors use ["*"]
    # Other available selectors are "beat", "publisher", "service"
    # Multiple selectors can be chained.
    #logging.selectors: [ ]

    # Send all logging output to stderr. The default is false.
    #logging.to_stderr: false

    # Send all logging output to syslog. The default is false.
    #logging.to_syslog: false

    # Send all logging output to Windows Event Logs. The default is false.
    #logging.to_eventlog: false

    # If enabled, Metricbeat periodically logs its internal metrics that have changed
    # in the last period. For each metric that changed, the delta from the value at
    # the beginning of the period is logged. Also, the total values for
    # all non-zero internal metrics are logged on shutdown. The default is true.
    logging.metrics.enabled: true

    # ============================= X-Pack Monitoring ==============================
    # Metricbeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    monitoring.enabled: true

    # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
    # Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
    # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
    #monitoring.cluster_uuid:

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well.
    # Note that the settings should point to your Elasticsearch *monitoring* cluster.
    # Any setting that is not set is automatically inherited from the Elasticsearch
    # output configuration, so if you have the Elasticsearch output configured such
    # that it is pointing to your Elasticsearch monitoring cluster, you can simply
    # uncomment the following line.
    monitoring.elasticsearch:

      # Array of hosts to connect to.
      # Scheme and port can be left out and will be set to the default (http and 9200)
      # In case you specify an additional path, the scheme is required: http://localhost:9200/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
      #hosts: ["localhost:9200"]

      # Set gzip compression level.
      #compression_level: 0

      # Protocol - either `http` (default) or `https`.
      protocol: "http"

      # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
      # The default is 50.
      bulk_max_size: 50

      # The number of seconds to wait before trying to reconnect to Elasticsearch
      # after a network error. After waiting backoff.init seconds, the Beat
      # tries to reconnect. If the attempt fails, the backoff timer is increased
      # exponentially up to backoff.max. After a successful connection, the backoff
      # timer is reset. The default is 1s.
      backoff.init: 1s

      # The maximum number of seconds to wait before attempting to connect to
      # Elasticsearch after a network error. The default is 60s.
      backoff.max: 60s

      # Configure HTTP request timeout before failing a request to Elasticsearch.
      timeout: 90

      metrics.period: 10s
      state.period: 1m

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        - core
        - diskio
        - socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - volume
        - event
        - container
      period: 10s
      enabled: true
      hosts: ["localhost:10250"]
      add_metadata: true
    - module: kubernetes
      enabled: true
      metricsets:
        - proxy
      hosts: ["localhost:10249"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - controllermanager
      hosts: ["localhost:10252"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - scheduler
      hosts: ["localhost:10251"]
      period: 10s

  elasticsearch.yml: |-
    - module: elasticsearch
      xpack.enabled: true
      period: 10s
      hosts: ["http://localhost:9200"]
      username: "elastic"
      password: "changeme"
      metricsets:
        - node
        - node_stats
        - index
        - index_recovery
        - index_summary
        - shard
        #- ml_job

  traefik.yml: |-
    - module: traefik
      metricsets: ["health"]
      period: 10s
      hosts: ["localhost:8080"]

  linux.yml: |-
    - module: linux
      period: 10s
      metricsets:
        - "pageinfo"
        - "memory"
        # - ksm
        # - conntrack
        # - iostat
      enabled: true
      #hostfs: /hostfs

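A note on the `elasticsearch` module above: it points at `http://localhost:9200`. Because the DaemonSet runs with `hostNetwork: true` on every node, `localhost` only reaches Elasticsearch on hosts that actually run a node of the cluster, which matches the `dial tcp 127.0.0.1:9200: connect: connection refused` errors in the logs below. A minimal sketch of the alternative, assuming Elasticsearch is exposed through a Service named `elasticsearch` (an assumption; substitute your Service DNS name):

```yaml
# Sketch: target the Elasticsearch Service instead of localhost.
# "elasticsearch:9200" is an assumed Service name/port; adjust for your cluster.
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://elasticsearch:9200"]
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"
```

With `xpack.enabled: true` the explicit `metricsets` list is effectively ignored and a fixed set is collected, which is why the logs also show `ccr`, `enrich` and `cluster_stats` being fetched even though they are not listed.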
---
#############
# DAEMONSET #
#############

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.17.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: policy
          mountPath: /usr/share/metricbeat/policy
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
        - name: data
          mountPath: /usr/share/metricbeat/data
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
      - name: policy
        configMap:
          defaultMode: 0600
          name: metricbeat-policy

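On the `kubernetes` module in `metricbeat-daemonset-modules`: the kubelet endpoint on port 10250 serves HTTPS, so scraping it as plain `http://localhost:10250` would explain the `HTTP error 400 in : 400 Bad Request` from the `node`, `system`, `pod`, `volume` and `container` metricsets. The `scheduler` and `controllermanager` endpoints (10251/10252) usually listen only on the control-plane nodes, and newer releases moved them to the secure ports 10259/10257, which would explain `connection refused` on workers. A sketch of the kubelet part, assuming the default service-account token path:

```yaml
# Sketch: scrape the kubelet over HTTPS with the pod's service-account token.
# NODE_NAME is already injected via the Downward API in the DaemonSet spec.
- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - volume
    - event
    - container
  period: 10s
  hosts: ["https://${NODE_NAME}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: "none"  # kubelet serving certs are often self-signed
  add_metadata: true
```

The ClusterRole above already grants `get` on `nodes/stats`, which should be sufficient for kubelet authorization of these requests.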
---
Deployment
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-deployment-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # ====================== Index Lifecycle Management (ILM) ======================

    # Configure index lifecycle management (ILM). These settings create a write
    # alias and add additional settings to the index template. When ILM is enabled,
    # output.elasticsearch.index is ignored, and the write alias is used to set the
    # index name.

    # Enable ILM support. Valid values are true, false, and auto. When set to auto
    # (the default), the Beat uses index lifecycle management when it connects to a
    # cluster that supports ILM; otherwise, it creates daily indices.
    setup.ilm.enabled: true

    # Set the prefix used in the index lifecycle write alias name. The default alias
    # name is 'metricbeat-%{[agent.version]}'.
    # setup.ilm.rollover_alias: 'metricbeat'

    # Set the rollover index pattern. The default is "%{now/d}-000001".
    setup.ilm.pattern: "{now/d}-000001"

    # Set the lifecycle policy name. The default policy name is
    # 'beatname'.
    setup.ilm.policy_name: "metricbeat-rollover-7-days"

    # The path to a JSON file that contains a lifecycle policy configuration. Used
    # to load your own lifecycle policy.
    setup.ilm.policy_file: /usr/share/metricbeat/policy/ilm-policy.json

    # Disable the check for an existing lifecycle policy. The default is true. If
    # you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
    # can be installed.
    setup.ilm.check_exists: true

    # Overwrite the lifecycle policy at startup. The default is false.
    setup.ilm.overwrite: true


    # ================================== Logging ===================================

    # There are four options for the log output: file, stderr, syslog, eventlog
    # The file output is the default.

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    logging.level: info

    # Enable debug output for selected components. To enable all selectors use ["*"]
    # Other available selectors are "beat", "publisher", "service"
    # Multiple selectors can be chained.
    #logging.selectors: [ ]

    # Send all logging output to stderr. The default is false.
    #logging.to_stderr: false

    # Send all logging output to syslog. The default is false.
    #logging.to_syslog: false

    # Send all logging output to Windows Event Logs. The default is false.
    #logging.to_eventlog: false

    # If enabled, Metricbeat periodically logs its internal metrics that have changed
    # in the last period. For each metric that changed, the delta from the value at
    # the beginning of the period is logged. Also, the total values for
    # all non-zero internal metrics are logged on shutdown. The default is true.
    logging.metrics.enabled: true

    # ============================= X-Pack Monitoring ==============================
    # Metricbeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    monitoring.enabled: true

    # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
    # Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
    # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
    #monitoring.cluster_uuid:

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well.
    # Note that the settings should point to your Elasticsearch *monitoring* cluster.
    # Any setting that is not set is automatically inherited from the Elasticsearch
    # output configuration, so if you have the Elasticsearch output configured such
    # that it is pointing to your Elasticsearch monitoring cluster, you can simply
    # uncomment the following line.
    monitoring.elasticsearch:

      # Array of hosts to connect to.
      # Scheme and port can be left out and will be set to the default (http and 9200)
      # In case you specify an additional path, the scheme is required: http://localhost:9200/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
      #hosts: ["localhost:9200"]

      # Set gzip compression level.
      #compression_level: 0

      # Protocol - either `http` (default) or `https`.
      protocol: "http"

      # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
      # The default is 50.
      bulk_max_size: 50

      # The number of seconds to wait before trying to reconnect to Elasticsearch
      # after a network error. After waiting backoff.init seconds, the Beat
      # tries to reconnect. If the attempt fails, the backoff timer is increased
      # exponentially up to backoff.max. After a successful connection, the backoff
      # timer is reset. The default is 1s.
      backoff.init: 1s

      # The maximum number of seconds to wait before attempting to connect to
      # Elasticsearch after a network error. The default is 60s.
      backoff.max: 60s

      # Configure HTTP request timeout before failing a request to Elasticsearch.
      timeout: 90

      metrics.period: 10s
      state.period: 1m

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  kubernetes.yml: |-
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_daemonset
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        - state_service
        - state_persistentvolume
        - state_persistentvolumeclaim
        - state_storageclass
      period: 10s
      hosts: ["kube-state-metrics:8080"]
      add_metadata: true
      in_cluster: true

  kibana.yml: |-
    - module: kibana
      metricsets: ["status"]
      period: 10s
      hosts: ["kibana:5601"]
      basepath: ""
      enabled: true

---

# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.17.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: policy
          mountPath: /usr/share/metricbeat/policy
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
      - name: policy
        configMap:
          defaultMode: 0600
          name: metricbeat-policy

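The Deployment's `kubernetes` module scrapes `kube-state-metrics:8080`, which assumes a kube-state-metrics Service resolvable by that short name, i.e. deployed in the same namespace (`kube-logging`). Cluster DNS still works here despite `hostNetwork: true`, because `dnsPolicy: ClusterFirstWithHostNet` is set. If kube-state-metrics lives in another namespace, a fully qualified name avoids the ambiguity (the `kube-system` namespace below is an assumption):

```yaml
# Sketch: fully qualified Service name; kube-system is an assumed namespace.
- module: kubernetes
  enabled: true
  metricsets:
    - state_node
    - state_deployment
    - state_pod
    - state_container
  period: 10s
  hosts: ["kube-state-metrics.kube-system.svc:8080"]
  add_metadata: true
  in_cluster: true
```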
---

But in the logs I noticed the following:

2022-03-03T21:46:29.607Z	ERROR	metrics/metrics.go:304	error determining cgroups version: error reading /proc/12824/cgroup: open /proc/12824/cgroup: no such file or directory
2022-03-03T21:46:29.879Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.index: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.880Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.index_recovery: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.880Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.index_summary: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.880Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.ml_job: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.880Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.enrich: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.881Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.shard: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.881Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.node_stats: error making http request: Get "http://localhost:9200/_nodes/_local/stats": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.881Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.ccr: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:29.882Z	ERROR	module/wrapper.go:259	Error fetching data for metricset elasticsearch.cluster_stats: error determining if connected Elasticsearch node is master: error making http request: Get "http://localhost:9200/_nodes/_local/nodes": dial tcp 127.0.0.1:9200: connect: connection refused
2022-03-03T21:46:30.279Z	ERROR	module/wrapper.go:259	Error fetching data for metricset traefik.health: failed to sample health: HTTP error 404 in : 404 Not Found
2022-03-03T21:46:30.887Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:30.887Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:31.073Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.controllermanager: error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp 127.0.0.1:10252: connect: connection refused
2022-03-03T21:46:31.073Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.scheduler: error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp 127.0.0.1:10251: connect: connection refused
2022-03-03T21:46:31.079Z	ERROR	[kubernetes.node]	node/node.go:95	HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:31.373Z	ERROR	[kubernetes.pod]	pod/pod.go:94	HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:31.373Z	ERROR	[kubernetes.container]	container/container.go:93	HTTP error 400 in : 400 Bad Request
[... the same set of errors repeats every 10 seconds]
2022-03-03T21:46:51.079Z	ERROR	[kubernetes.node]	node/node.go:95	HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:51.373Z	ERROR	[kubernetes.pod]	pod/pod.go:94	HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:51.373Z	ERROR	[kubernetes.container]	container/container.go:93	HTTP error 400 in : 400 Bad Request
2022-03-03T21:46:59.588Z	ERROR	metrics/metrics.go:304	error determining cgroups version: error reading /proc/12824/cgroup: open /proc/12824/cgroup: no such file or directory

What am I missing here?

I'm assuming you want to monitor the same Elasticsearch cluster that you are writing to.

But in the Metricbeat config you have:

  elasticsearch.yml: |-
    - module: elasticsearch
      xpack.enabled: true
      period: 10s
      hosts: ["http://localhost:9200"]   # <-- should probably be "http://elasticsearch:9200"
      username: "elastic"
      password: "changeme"
      metricsets:
        - node
        - node_stats
        - index
        - index_recovery
        - index_summary
        - shard
        #- ml_job

And your error message says there is no Elasticsearch on localhost:

error making http request: Get "http://localhost:9200/_nodes/_local/stats": dial tcp 127.0.0.1:9200: connect: connection refused

That is probably because there is no Elasticsearch listening on localhost; it is reachable at the `elasticsearch` host instead.
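The module block would then look roughly like this (assuming the Elasticsearch service is named `elasticsearch` and is resolvable from the Metricbeat pods):

```yaml
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  # Point at the Kubernetes service, not localhost:
  hosts: ["http://elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
```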


I fixed the Elasticsearch module. Now I have following deployment manifest

DaemonSet
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    app: metricbeat
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  resources:
  - namespaces
  - pods
  - services
  - events
  - deployments
  - nodes
  - nodes/stats
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-policy
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  ilm-policy.json: |-
    {
    	"policy": {
    		"phases": {
    			"hot": {
    				"actions": {
    					"rollover": {
    						"max_age": "3d",
    						"max_size": "50gb"
    					},
    					"set_priority": {
    						"priority": 100
    					}
    				},
    				"min_age": "0ms"
    			},
    			"warm": {
    				"actions": {
    					"readonly": {},
    					"set_priority": {
    						"priority": 50
    					}
    				},
    				"min_age": "5d"
    			},
    			"cold": {
    				"actions": {
    					"freeze": {},
    					"set_priority": {
    						"priority": 0
    					}
    				},
    				"min_age": "6d"
    			},
    			"delete": {
    				"actions": {
    					"delete": {
    						"delete_searchable_snapshot": true
    					}
    				},
    				"min_age": "7d"
    			}
    		}
    	}
    }

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    #metricbeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # ====================== Index Lifecycle Management (ILM) ======================

    # Configure index lifecycle management (ILM). These settings create a write
    # alias and add additional settings to the index template. When ILM is enabled,
    # output.elasticsearch.index is ignored, and the write alias is used to set the
    # index name.

    # Enable ILM support. Valid values are true, false, and auto. When set to auto
    # (the default), the Beat uses index lifecycle management when it connects to a
    # cluster that supports ILM; otherwise, it creates daily indices.
    setup.ilm.enabled: true

    # Set the prefix used in the index lifecycle write alias name. The default alias
    # name is 'metricbeat-%{[agent.version]}'.
    # setup.ilm.rollover_alias: 'metricbeat'

    # Set the rollover index pattern. The default is "%{now/d}-000001".
    setup.ilm.pattern: "{now/d}-000001"

    # Set the lifecycle policy name. The default policy name is
    # 'beatname'.
    setup.ilm.policy_name: "metricbeat-rollover-7-days"

    # The path to a JSON file that contains a lifecycle policy configuration. Used
    # to load your own lifecycle policy.
    setup.ilm.policy_file: /usr/share/metricbeat/policy/ilm-policy.json

    # Disable the check for an existing lifecycle policy. The default is true. If
    # you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
    # can be installed.
    setup.ilm.check_exists: true

    # Overwrite the lifecycle policy at startup. The default is false.
    setup.ilm.overwrite: true


    # ================================== Logging ===================================

    # There are four options for the log output: file, stderr, syslog, eventlog
    # The file output is the default.

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    logging.level: info

    # Enable debug output for selected components. To enable all selectors use ["*"]
    # Other available selectors are "beat", "publisher", "service"
    # Multiple selectors can be chained.
    #logging.selectors: [ ]

    # Send all logging output to stderr. The default is false.
    #logging.to_stderr: false

    # Send all logging output to syslog. The default is false.
    #logging.to_syslog: false

    # Send all logging output to Windows Event Logs. The default is false.
    #logging.to_eventlog: false

    # If enabled, Metricbeat periodically logs its internal metrics that have changed
    # in the last period. For each metric that changed, the delta from the value at
    # the beginning of the period is logged. Also, the total values for
    # all non-zero internal metrics are logged on shutdown. The default is true.
    logging.metrics.enabled: true

    # ============================= X-Pack Monitoring ==============================
    # Metricbeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    monitoring.enabled: true

    # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
    # Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
    # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
    #monitoring.cluster_uuid:

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well.
    # Note that the settings should point to your Elasticsearch *monitoring* cluster.
    # Any setting that is not set is automatically inherited from the Elasticsearch
    # output configuration, so if you have the Elasticsearch output configured such
    # that it is pointing to your Elasticsearch monitoring cluster, you can simply
    # uncomment the following line.
    monitoring.elasticsearch:

      # Array of hosts to connect to.
      # Scheme and port can be left out and will be set to the default (http and 9200)
      # In case you specify an additional path, the scheme is required: http://localhost:9200/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
      #hosts: ["localhost:9200"]

      # Set gzip compression level.
      #compression_level: 0

      # Protocol - either `http` (default) or `https`.
      protocol: "http"

      # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
      # The default is 50.
      bulk_max_size: 50

      # The number of seconds to wait before trying to reconnect to Elasticsearch
      # after a network error. After waiting backoff.init seconds, the Beat
      # tries to reconnect. If the attempt fails, the backoff timer is increased
      # exponentially up to backoff.max. After a successful connection, the backoff
      # timer is reset. The default is 1s.
      backoff.init: 1s

      # The maximum number of seconds to wait before attempting to connect to
      # Elasticsearch after a network error. The default is 60s.
      backoff.max: 60s

      # Configure HTTP request timeout before failing a request to Elasticsearch.
      timeout: 90

      metrics.period: 10s
      state.period: 1m

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  # system.yml: |-
  #   - module: system
  #     period: 10s
  #     metricsets:
  #       - cpu
  #       - load
  #       - memory
  #       - network
  #       - process
  #       - process_summary
  #       - core
  #       - diskio
  #       - socket
  #     processes: ['.*']
  #     process.include_top_n:
  #       by_cpu: 5      # include top 5 processes by CPU
  #       by_memory: 5   # include top 5 processes by memory
  #
  #   - module: system
  #     period: 1m
  #     metricsets:
  #       - filesystem
  #       - fsstat
  #     processors:
  #     - drop_event.when.regexp:
  #         system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'

  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - volume
        - event
        - container
      period: 10s
      enabled: true
      hosts: ["localhost:10250"]
      add_metadata: true
    - module: kubernetes
      enabled: true
      metricsets:
        - proxy
      hosts: ["localhost:10249"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - controllermanager
      hosts: ["localhost:10252"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - scheduler
      hosts: ["localhost:10251"]
      period: 10s

  # linux.yml: |-
  #   - module: linux
  #     period: 10s
  #     metricsets:
  #       - "pageinfo"
  #       - "memory"
  #       # - ksm
  #       # - conntrack
  #       # - iostat
  #     enabled: true
  #     #hostfs: /hostfs

---
#############
# DAEMONSET #
#############

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.17.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: policy
          mountPath: /usr/share/metricbeat/policy
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
      - name: policy
        configMap:
          defaultMode: 0600
          name: metricbeat-policy

---
Deployment
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-deployment-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    # ====================== Index Lifecycle Management (ILM) ======================

    # Configure index lifecycle management (ILM). These settings create a write
    # alias and add additional settings to the index template. When ILM is enabled,
    # output.elasticsearch.index is ignored, and the write alias is used to set the
    # index name.

    # Enable ILM support. Valid values are true, false, and auto. When set to auto
    # (the default), the Beat uses index lifecycle management when it connects to a
    # cluster that supports ILM; otherwise, it creates daily indices.
    setup.ilm.enabled: true

    # Set the prefix used in the index lifecycle write alias name. The default alias
    # name is 'metricbeat-%{[agent.version]}'.
    # setup.ilm.rollover_alias: 'metricbeat'

    # Set the rollover index pattern. The default is "%{now/d}-000001".
    setup.ilm.pattern: "{now/d}-000001"

    # Set the lifecycle policy name. The default policy name is
    # 'beatname'.
    setup.ilm.policy_name: "metricbeat-rollover-7-days"

    # The path to a JSON file that contains a lifecycle policy configuration. Used
    # to load your own lifecycle policy.
    setup.ilm.policy_file: /usr/share/metricbeat/policy/ilm-policy.json

    # Disable the check for an existing lifecycle policy. The default is true. If
    # you disable this check, set setup.ilm.overwrite: true so the lifecycle policy
    # can be installed.
    setup.ilm.check_exists: true

    # Overwrite the lifecycle policy at startup. The default is false.
    setup.ilm.overwrite: true


    # ================================== Logging ===================================

    # There are four options for the log output: file, stderr, syslog, eventlog
    # The file output is the default.

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    logging.level: info

    # Enable debug output for selected components. To enable all selectors use ["*"]
    # Other available selectors are "beat", "publisher", "service"
    # Multiple selectors can be chained.
    #logging.selectors: [ ]

    # Send all logging output to stderr. The default is false.
    #logging.to_stderr: false

    # Send all logging output to syslog. The default is false.
    #logging.to_syslog: false

    # Send all logging output to Windows Event Logs. The default is false.
    #logging.to_eventlog: false

    # If enabled, Metricbeat periodically logs its internal metrics that have changed
    # in the last period. For each metric that changed, the delta from the value at
    # the beginning of the period is logged. Also, the total values for
    # all non-zero internal metrics are logged on shutdown. The default is true.
    logging.metrics.enabled: true

    # ============================= X-Pack Monitoring ==============================
    # Metricbeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    monitoring.enabled: true

    # Sets the UUID of the Elasticsearch cluster under which monitoring data for this
    # Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
    # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
    #monitoring.cluster_uuid:

    # Uncomment to send the metrics to Elasticsearch. Most settings from the
    # Elasticsearch output are accepted here as well.
    # Note that the settings should point to your Elasticsearch *monitoring* cluster.
    # Any setting that is not set is automatically inherited from the Elasticsearch
    # output configuration, so if you have the Elasticsearch output configured such
    # that it is pointing to your Elasticsearch monitoring cluster, you can simply
    # uncomment the following line.
    monitoring.elasticsearch:

      # Array of hosts to connect to.
      # Scheme and port can be left out and will be set to the default (http and 9200)
      # In case you specify an additional path, the scheme is required: http://localhost:9200/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
      #hosts: ["localhost:9200"]

      # Set gzip compression level.
      #compression_level: 0

      # Protocol - either `http` (default) or `https`.
      protocol: "http"

      # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
      # The default is 50.
      bulk_max_size: 50

      # The number of seconds to wait before trying to reconnect to Elasticsearch
      # after a network error. After waiting backoff.init seconds, the Beat
      # tries to reconnect. If the attempt fails, the backoff timer is increased
      # exponentially up to backoff.max. After a successful connection, the backoff
      # timer is reset. The default is 1s.
      backoff.init: 1s

      # The maximum number of seconds to wait before attempting to connect to
      # Elasticsearch after a network error. The default is 60s.
      backoff.max: 60s

      # Configure HTTP request timeout before failing a request to Elasticsearch.
      timeout: 90

      metrics.period: 10s
      state.period: 1m

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-logging
  labels:
    app: metricbeat
data:
  kubernetes.yml: |-
    - module: kubernetes
      enabled: true
      metricsets:
        - state_node
        - state_daemonset
        - state_deployment
        - state_replicaset
        - state_statefulset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        - state_service
        - state_persistentvolume
        - state_persistentvolumeclaim
        - state_storageclass
      period: 10s
      hosts: ["kube-state-metrics:8080"]
      add_metadata: true
      in_cluster: true

  elasticsearch.yml: |-
    - module: elasticsearch
      xpack.enabled: true
      period: 10s
      hosts: ["http://elasticsearch:9200"]
      username: "elastic"
      password: "changeme"
      metricsets:
        - node
        - node_stats
        - index
        - index_recovery
        - index_summary
        - shard
        #- ml_job

  # traefik.yml: |-
  #   - module: traefik
  #     metricsets: ["health"]
  #     period: 10s
  #     hosts: ["localhost:8080"]

  kibana.yml: |-
    - module: kibana
      metricsets: ["status"]
      period: 10s
      hosts: ["kibana:5601"]
      basepath: ""
      enabled: true

---

# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: kube-logging
  labels:
    app: metricbeat
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.17.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 200m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: policy
          mountPath: /usr/share/metricbeat/policy
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
      - name: policy
        configMap:
          defaultMode: 0600
          name: metricbeat-policy

---

Getting this error now

2022-03-04T14:34:46.892Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.scheduler: error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp 127.0.0.1:10251: connect: connection refused
2022-03-04T14:34:46.969Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.controllermanager: error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp 127.0.0.1:10252: connect: connection refused
2022-03-04T14:34:46.981Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:46.983Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:47.078Z	ERROR	[kubernetes.node]	node/node.go:95	HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:47.169Z	ERROR	[kubernetes.container]	container/container.go:93	HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:47.218Z	ERROR	[kubernetes.pod]	pod/pod.go:94	HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:56.892Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.scheduler: error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp 127.0.0.1:10251: connect: connection refused
2022-03-04T14:34:56.970Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.controllermanager: error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp 127.0.0.1:10252: connect: connection refused
2022-03-04T14:34:56.982Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:56.982Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:57.077Z	ERROR	[kubernetes.node]	node/node.go:95	HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:57.169Z	ERROR	[kubernetes.container]	container/container.go:93	HTTP error 400 in : 400 Bad Request
2022-03-04T14:34:57.218Z	ERROR	[kubernetes.pod]	pod/pod.go:94	HTTP error 400 in : 400 Bad Request
2022-03-04T14:35:06.893Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.scheduler: error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp 127.0.0.1:10251: connect: connection refused
2022-03-04T14:35:06.970Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.controllermanager: error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp 127.0.0.1:10252: connect: connection refused
2022-03-04T14:35:06.985Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:35:06.985Z	ERROR	module/wrapper.go:259	Error fetching data for metricset kubernetes.volume: error doing HTTP request to fetch 'volume' Metricset data: HTTP error 400 in : 400 Bad Request
2022-03-04T14:35:07.078Z	ERROR	[kubernetes.node]	node/node.go:95	HTTP error 400 in : 400 Bad Request
2022-03-04T14:35:07.169Z	ERROR	[kubernetes.container]	container/container.go:93	HTTP error 400 in : 400 Bad Request
2022-03-04T14:35:07.217Z	ERROR	[kubernetes.pod]	pod/pod.go:94	HTTP error 400 in : 400 Bad Request

I think this is regarding

  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - volume
        - event
        - container
      period: 10s
      enabled: true
      hosts: ["localhost:10250"]
      add_metadata: true
    - module: kubernetes
      enabled: true
      metricsets:
        - proxy
      hosts: ["localhost:10249"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - controllermanager
      hosts: ["localhost:10252"]
      period: 10s
    - module: kubernetes
      enabled: true
      metricsets:
        - scheduler
      hosts: ["localhost:10251"]
      period: 10s

What is wrong here?

[I am running the ELK stack on AWS EKS]

I would start from our reference config and work from there. I have deployed it a number of times and it has worked for me (and others):

https://raw.githubusercontent.com/elastic/beats/8.0/deploy/kubernetes/metricbeat-kubernetes.yaml
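For what it's worth, the kubelet-backed metricsets in that reference manifest are pointed at the node itself over HTTPS with the service-account token, along these lines (a sketch based on the 7.x reference config; exact settings may differ between versions):

```yaml
- module: kubernetes
  metricsets:
    - node
    - system
    - pod
    - container
    - volume
  period: 10s
  host: ${NODE_NAME}
  # Talk to the kubelet over https with the pod's service-account token;
  # plain http against port 10250 is what typically produces 400 Bad Request.
  hosts: ["https://${NODE_NAME}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: "none"
```

Note also that on EKS the scheduler and controller-manager run on the managed control plane, so `localhost:10251` and `localhost:10252` are not reachable from worker nodes; those metricsets generally need to be removed on managed clusters.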
