Metricbeat error: missing field accessing 'metricbeat.modules.21.hosts.0'

Hello everyone. I installed Metricbeat on my CentOS 8 machine with this configuration file.

########################## Metricbeat Configuration ###########################

# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains only
# the most common options, please see metricbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#============================  Config Reloading ===============================

# Config reloading allows you to dynamically load modules. Each monitored file
# must contain one or more modules as a list.
metricbeat.config.modules:

  # Glob pattern for configuration reloading
  path: ${path.config}/modules.d/*.yml

  # Period on which files under path should be checked for changes
  reload.period: 10s

  # Set to true to enable config reloading
  reload.enabled: false

# Maximum amount of time to randomly delay the start of a metricset. Use 0 to
# disable startup delay.
metricbeat.max_start_delay: 10s

#============================== Autodiscover ===================================

# Autodiscover allows you to detect changes in the system and spawn new modules
# as they happen.

#metricbeat.autodiscover:
  # List of enabled autodiscover providers
#  providers:
#    - type: docker
#      templates:
#        - condition:
#            equals.docker.container.image: etcd
#          config:
#            - module: etcd
#              metricsets: ["leader", "self", "store"]
#              period: 10s
#              hosts: ["${host}:2379"]

#=========================== Timeseries instance ===============================

# Enabling this will add a `timeseries.instance` keyword field to all metric
# events. For a given metricset, this field will be unique for every single item
# being monitored.
# This setting is experimental.

#timeseries.enabled: false


#==========================  Modules configuration =============================
metricbeat.modules:

#-------------------------------- System Module --------------------------------
- module: system
  metricsets:
    - cpu             # CPU usage
    - load            # CPU load averages
    - memory          # Memory usage
    - network         # Network IO
    - process         # Per process metrics
    - process_summary # Process summary
    - uptime          # System Uptime
    - socket_summary  # Socket summary
    #- core           # Per CPU core usage
    #- diskio         # Disk IO
    #- filesystem     # File system usage for each mountpoint
    #- fsstat         # File system summary metrics
    #- raid           # Raid
    #- socket         # Sockets and connection info (linux only)
    #- service        # systemd service information
  enabled: true
  period: 10s
  processes: ['.*']

  # Configure the mount point of the host’s filesystem for use in monitoring a host from within a container
  #hostfs: "/hostfs"

  # Configure the metric types that are included by these metricsets.
  cpu.metrics:  ["percentages","normalized_percentages"]  # The other available option is ticks.
  core.metrics: ["percentages"]  # The other available option is ticks.

  # A list of filesystem types to ignore. The filesystem metricset will not
  # collect data from filesystems matching any of the specified types, and
  # fsstats will not include data from these filesystems in its summary stats.
  # If not set, types associated to virtual filesystems are automatically
  # added when this information is available in the system (e.g. the list of
  # `nodev` types in `/proc/filesystem`).
  #filesystem.ignore_types: []

  # These options allow you to filter out all processes that are not
  # in the top N by CPU or memory, in order to reduce the number of documents created.
  # If both the `by_cpu` and `by_memory` options are used, the union of the two sets
  # is included.
  #process.include_top_n:

    # Set to false to disable this feature and include all processes
    #enabled: true

    # How many processes to include from the top by CPU. The processes are sorted
    # by the `system.process.cpu.total.pct` field.
    #by_cpu: 0

    # How many processes to include from the top by memory. The processes are sorted
    # by the `system.process.memory.rss.bytes` field.
    #by_memory: 0

  # If false, cmdline of a process is not cached.
  #process.cmdline.cache.enabled: true

  # Enable collection of cgroup metrics from processes on Linux.
  #process.cgroups.enabled: true

  # A list of regular expressions used to whitelist environment variables
  # reported with the process metricset's events. Defaults to empty.
  #process.env.whitelist: []

  # Include the cumulative CPU tick values with the process metrics. Defaults
  # to false.
  #process.include_cpu_ticks: false

  # Raid mount point to monitor
  #raid.mount_point: '/'

  # Configure reverse DNS lookup on remote IP addresses in the socket metricset.
  #socket.reverse_lookup.enabled: false
  #socket.reverse_lookup.success_ttl: 60s
  #socket.reverse_lookup.failure_ttl: 60s

  # Diskio configurations
  #diskio.include_devices: []

  # Filter systemd services by status or sub-status
  #service.state_filter: ["active"]

  # Filter systemd services based on a name pattern
  #service.pattern_filter: ["ssh*", "nfs*"]

#------------------------------ Aerospike Module ------------------------------
- module: aerospike
  metricsets: ["namespace"]
  enabled: true
  period: 10s
  hosts: ["localhost:3000"]

#-------------------------------- Apache Module --------------------------------
- module: apache
  metricsets: ["status"]
  period: 10s
  enabled: true

  # Apache hosts
  hosts: ["http://127.0.0.1"]

  # Path to server status. Default server-status
  #server_status_path: "server-status"

  # Username of hosts.  Empty by default
  #username: username

  # Password of hosts. Empty by default
  #password: password

#--------------------------------- Beat Module ---------------------------------
- module: beat
  metricsets:
    - stats
    - state
  period: 10s
  hosts: ["http://localhost:5066"]
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Set to true to send data collected by this module to X-Pack
  # Monitoring instead of metricbeat-* indices.
  #xpack.enabled: false

#--------------------------------- Ceph Module ---------------------------------
# Metricsets depending on the Ceph REST API (default port: 5000)
- module: ceph
  metricsets: ["cluster_disk", "cluster_health", "monitor_health", "pool_disk", "osd_tree"]
  period: 10s
  hosts: ["localhost:5000"]
  enabled: true

# Metricsets depending on the Ceph Manager Daemon (default port: 8003)
- module: ceph
  metricsets:
    - mgr_cluster_disk
    - mgr_osd_perf
    - mgr_pool_disk
    - mgr_osd_pool_stats
    - mgr_osd_tree
  period: 1m
  hosts: [ "https://localhost:8003" ]
  #username: "user"
  #password: "secret"

#-------------------------------- Consul Module --------------------------------
- module: consul
  metricsets:
  - agent
  enabled: true
  period: 10s
  hosts: ["localhost:8500"]


#------------------------------ Couchbase Module ------------------------------
- module: couchbase
  metricsets: ["bucket", "cluster", "node"]
  period: 10s
  hosts: ["localhost:8091"]
  enabled: true

#------------------------------- CouchDB Module -------------------------------
- module: couchdb
  metricsets: ["server"]
  period: 10s
  hosts: ["localhost:5984"]

#-------------------------------- Docker Module --------------------------------
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "event"
    - "healthcheck"
    - "info"
    #- "image"
    - "memory"
    - "network"
    #- "network_summary"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

  # If set to true, replace dots in labels with `_`.
  #labels.dedot: false

  # If set to true, collects metrics per core.
  #cpu.cores: true

  # To connect to Docker over TLS you must specify a client and CA certificate.
  #ssl:
    #certificate_authority: "/etc/pki/root/ca.pem"
    #certificate:           "/etc/pki/client/cert.pem"
    #key:                   "/etc/pki/client/cert.key"

#------------------------------ Dropwizard Module ------------------------------
- module: dropwizard
  metricsets: ["collector"]
  period: 10s
  hosts: ["localhost:8080"]
  metrics_path: /metrics/metrics
  namespace: example
  enabled: true

#---------------------------- Elasticsearch Module ----------------------------
- module: elasticsearch
  metricsets:
    - node
    - node_stats
    #- index
    #- index_recovery
    #- index_summary
    #- shard
    #- ml_job
  period: 10s
  hosts: ["http://localhost:9200"]
  #username: "elastic"
  #password: "changeme"
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  #index_recovery.active_only: true
  #xpack.enabled: false
  #scope: node

#------------------------------ Envoyproxy Module ------------------------------
- module: envoyproxy
  metricsets: ["server"]
  period: 10s
  hosts: ["localhost:9901"]

#--------------------------------- Etcd Module ---------------------------------
- module: etcd
  metricsets: ["leader", "self", "store"]
  period: 10s
  hosts: ["localhost:2379"]

#-------------------------------- Golang Module --------------------------------
- module: golang
  metricsets:
    - expvar
    - heap
  period: 10s
  hosts: ["localhost:6060"]
  heap.path: "/debug/vars"
  expvar:
    namespace: "example"
    path: "/debug/vars"

#------------------------------- Graphite Module -------------------------------
- module: graphite
  metricsets: ["server"]
  enabled: true

  # Host address to listen on. Default localhost.
  #host: localhost

  # Listening port. Default 2003.
  #port: 2003

  # Protocol to listen on. This can be udp or tcp. Default udp.
  #protocol: "udp"

  # Receive buffer size in bytes
  #receive_buffer_size: 1024

  #templates:
  #  - filter: "test.*.bash.*" # This would match metrics like test.localhost.bash.stats
  #    namespace: "test"
  #    template: ".host.shell.metric*" # test.localhost.bash.stats would become metric=stats and tags host=localhost,shell=bash
  #    delimiter: "_"


#------------------------------- HAProxy Module -------------------------------
- module: haproxy
  metricsets: ["info", "stat"]
  period: 10s
  # TCP socket, UNIX socket, or HTTP address where HAProxy stats are reported
  # TCP socket
  hosts: ["tcp://127.0.0.1:14567"]
  # UNIX socket
  #hosts: ["unix:///path/to/haproxy.sock"]
  # Stats page
  #hosts: ["http://127.0.0.1:14567"]
  username : "admin"
  password : "admin"
  enabled: true

#--------------------------------- HTTP Module ---------------------------------
- module: http
  metricsets:
    - json
  period: 10s
  hosts: ["localhost:80"]
  namespace: "json_namespace"
  path: "/"
  #body: ""
  #method: "GET"
  #username: "user"
  #password: "secret"
  #request.enabled: false
  #response.enabled: false
  #json.is_array: false
  #dedot.enabled: false

- module: http
  #metricsets:
  #  - server
  host: "localhost"
  port: "8080"
  enabled: false
  #paths:
  #  - path: "/foo"
  #    namespace: "foo"
  #    fields: # added to the response root; overwrites existing fields
  #      key: "value"

#------------------------------- Jolokia Module -------------------------------
#- module: jolokia
  #metricsets: ["jmx"]
  #period: 10s
  #hosts: ["localhost"]
  #namespace: "metrics"
  #path: "/jolokia/?ignoreErrors=true&canonicalNaming=false"
  #username: "user"
  #password: "secret"
  #jmx.mappings:
    #- mbean: 'java.lang:type=Runtime'
    #  attributes:
    #    - attr: Uptime
    #      field: uptime
    #- mbean: 'java.lang:type=Memory'
    #  attributes:
    #    - attr: HeapMemoryUsage
    #      field: memory.heap_usage
    #    - attr: NonHeapMemoryUsage
    #      field: memory.non_heap_usage
    # GC Metrics - this depends on what is available on your JVM
    #- mbean: 'java.lang:type=GarbageCollector,name=ConcurrentMarkSweep'
    #  attributes:
    #    - attr: CollectionTime
    #      field: gc.cms_collection_time
    #    - attr: CollectionCount
    #      field: gc.cms_collection_count

  #jmx.application:
  #jmx.instance:

#-------------------------------- Kafka Module --------------------------------
# Kafka metrics collected using the Kafka protocol
- module: kafka
  #metricsets:
  #  - partition
  #  - consumergroup
  period: 10s
  hosts: ["localhost:9092"]

  #client_id: metricbeat
  #retries: 3
  #backoff: 250ms

  # List of Topics to query metadata for. If empty, all topics will be queried.
  #topics: []

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

  # Client Certificate Passphrase (in case your Client Certificate Key is encrypted)
  #ssl.key_passphrase: "yourKeyPassphrase"

  # SASL authentication
  #username: ""
  #password: ""

  # SASL authentication mechanism used. Can be one of PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512.
  # Defaults to PLAIN when `username` and `password` are configured.
  #sasl.mechanism: ''

# Metrics collected from a Kafka broker using Jolokia
#- module: kafka
#  metricsets:
#    - broker
#  period: 10s
#  hosts: ["localhost:8779"]

# Metrics collected from a Java Kafka consumer using Jolokia
#- module: kafka
#  metricsets:
#    - consumer
#  period: 10s
#  hosts: ["localhost:8774"]

# Metrics collected from a Java Kafka producer using Jolokia
#- module: kafka
#  metricsets:
#    - producer
#  period: 10s
#  hosts: ["localhost:8775"]

#-------------------------------- Kibana Module --------------------------------
- module: kibana
  metricsets: ["status"]
  period: 10s
  hosts: ["localhost:5601"]
  basepath: ""
  enabled: true

  # Set to true to send data collected by this module to X-Pack
  # Monitoring instead of metricbeat-* indices.
  #xpack.enabled: false

#------------------------------ Kubernetes Module ------------------------------
# Node metrics, from kubelet:
- module: kubernetes
  metricsets:
    - container
    - node
    - pod
    - system
    - volume
  period: 10s
  enabled: true
  hosts: ["https://${NODE_NAME}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: "none"
  #ssl.certificate_authorities:
  #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"

  # Enriching parameters:
  add_metadata: true
  # When used outside the cluster:
  #node: node_name
  # If kube_config is not set, KUBECONFIG environment variable will be checked
  # and if not present it will fall back to InCluster
  #kube_config: ~/.kube/config
  # To additionally configure node and namespace metadata (added to pod, service and
  # container resource types), `add_resource_metadata` can be defined.
  # By default all labels are included, while annotations are not added by default.
  # add_resource_metadata:
  #   namespace:
  #     include_labels: ["namespacelabel1"]
  #   node:
  #     include_labels: ["nodelabel2"]
  #     include_annotations: ["nodeannotation1"]
  #   deployment: false
  # Kubernetes client QPS and burst can be configured additionally
  #kube_client_options:
  #  qps: 5
  #  burst: 10

# State metrics from kube-state-metrics service:
- module: kubernetes
  enabled: true
  metricsets:
    - state_node
    - state_daemonset
    - state_deployment
    - state_replicaset
    - state_statefulset
    - state_pod
    - state_container
    - state_job
    - state_cronjob
    - state_resourcequota
    - state_service
    - state_persistentvolume
    - state_persistentvolumeclaim
    - state_storageclass
    # Uncomment this to get k8s events:
    #- event
  period: 10s
  hosts: ["kube-state-metrics:8080"]

  # Enriching parameters:
  add_metadata: true
  # When used outside the cluster:
  #node: node_name
  # If kube_config is not set, KUBECONFIG environment variable will be checked
  # and if not present it will fall back to InCluster
  #kube_config: ~/.kube/config
  # Set the namespace to watch for resources
  #namespace: staging
  # To additionally configure node and namespace metadata (added to pod, service and
  # container resource types), `add_resource_metadata` can be defined.
  # By default all labels are included, while annotations are not added by default.
  # add_resource_metadata:
  #   namespace:
  #     include_labels: ["namespacelabel1"]
  #   node:
  #     include_labels: ["nodelabel2"]
  #     include_annotations: ["nodeannotation1"]
  #   deployment: false
  #   cronjob: false
  # Kubernetes client QPS and burst can be configured additionally
  #kube_client_options:
  #  qps: 5
  #  burst: 10

# Kubernetes Events
- module: kubernetes
  enabled: true
  metricsets:
    - event
  period: 10s
  # Skipping events older than Metricbeat's startup time is enabled by default.
  # Setting skip_older to false will stop filtering out older events.
  # This setting is also useful when event timestamps are not populated properly.
  #skip_older: false
  # If kube_config is not set, KUBECONFIG environment variable will be checked
  # and if not present it will fall back to InCluster
  #kube_config: ~/.kube/config
  # Set the namespace to watch for events
  #namespace: staging
  # Set the sync period of the watchers
  #sync_period: 10m
  # Kubernetes client QPS and burst can be configured additionally
  #kube_client_options:
  #  qps: 5
  #  burst: 10

# Kubernetes API server
# (when running metricbeat as a deployment)
- module: kubernetes
  enabled: true
  metricsets:
    - apiserver
  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.certificate_authorities:
    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  period: 30s

# Kubernetes proxy server
# (when running metricbeat locally at hosts or as a daemonset + host network)
- module: kubernetes
  enabled: true
  metricsets:
    - proxy
  hosts: ["localhost:10249"]
  period: 10s

# Kubernetes controller manager
# (URL and deployment method should be adapted to match the controller manager deployment / service / endpoint)
- module: kubernetes
  enabled: true
  metricsets:
    - controllermanager
  hosts: ["http://localhost:10252"]
  period: 10s

# Kubernetes scheduler
# (URL and deployment method should be adapted to match scheduler deployment / service / endpoint)
- module: kubernetes
  enabled: true
  metricsets:
    - scheduler
  hosts: ["localhost:10251"]
  period: 10s

#--------------------------------- KVM Module ---------------------------------
- module: kvm
  metricsets: ["dommemstat", "status"]
  enabled: true
  period: 10s
  hosts: ["unix:///var/run/libvirt/libvirt-sock"]
  # For remote hosts, setup network access in libvirtd.conf
  # and use the tcp scheme:
  # hosts: [ "tcp://<host>:16509" ]

  # Timeout to connect to Libvirt server
  #timeout: 1s

#-------------------------------- Linux Module --------------------------------
- module: linux
  period: 10s
  metricsets:
    - "pageinfo"
    - "memory"
    # - ksm
    # - conntrack
    # - iostat
    # - pressure
    # - rapl
  enabled: true
  #hostfs: /hostfs
  #rapl.use_msr_safe: false


#------------------------------- Logstash Module -------------------------------
- module: logstash
  metricsets: ["node", "node_stats"]
  enabled: true
  period: 10s
  hosts: ["localhost:9600"]

#------------------------------ Memcached Module ------------------------------
- module: memcached
  metricsets: ["stats"]
  period: 10s
  hosts: ["localhost:11211"]
  enabled: true

#------------------------------- MongoDB Module -------------------------------
- module: mongodb
  metricsets: ["dbstats", "status", "collstats", "metrics", "replstatus"]
  period: 10s
  enabled: true

  # The hosts must be passed as MongoDB URLs in the format:
  # [mongodb://][user:pass@]host[:port].
  # The username and password can also be set using the respective configuration
  # options. The credentials in the URL take precedence over the username and
  # password configuration options.
  hosts: ["localhost:27017"]

  # Optional SSL. By default is off.
  #ssl.enabled: true

  # Mode of verification of server certificate ('none' or 'full')
  #ssl.verification_mode: 'full'

  # List of root certificates for TLS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

  # Username to use when connecting to MongoDB. Empty by default.
  #username: user

  # Password to use when connecting to MongoDB. Empty by default.
  #password: pass

#-------------------------------- Munin Module --------------------------------
- module: munin
  metricsets: ["node"]
  enabled: true
  period: 10s
  hosts: ["localhost:4949"]

  # List of plugins to collect metrics from, by default it collects from
  # all the available ones.
  #munin.plugins: []

  # If set to true, field names are sanitized in accordance with the munin
  # implementation (all characters that are not alphanumeric or underscores
  # are replaced by underscores).
  #munin.sanitize: false

#-------------------------------- MySQL Module --------------------------------
- module: mysql
  metricsets:
    - status
  #  - galera_status
  #  - performance
  #  - query
  period: 10s

  # Host DSN should be defined as "user:pass@tcp(127.0.0.1:3306)/"
  # or "unix(/var/lib/mysql/mysql.sock)/",
  # or another DSN format supported by <https://github.com/Go-SQL-Driver/MySQL/>.
  # The username and password can either be set in the DSN or using the username
  # and password config options. Those specified in the DSN take precedence.
  hosts: ["root:secret@tcp(127.0.0.1:3306)/"]

  # Username of hosts. Empty by default.
  #username: root

  # Password of hosts. Empty by default.
  #password: secret

  # By setting raw to true, all raw fields from the status metricset will be added to the event.
  #raw: false

#--------------------------------- NATS Module ---------------------------------
- module: nats
  metricsets:
    - "connections"
    - "routes"
    - "stats"
    - "subscriptions"
    #- "connection"
    #- "route"
  period: 10s
  hosts: ["localhost:8222"]
  #stats.metrics_path: "/varz"
  #connections.metrics_path: "/connz"
  #routes.metrics_path: "/routez"
  #subscriptions.metrics_path: "/subsz"
  #connection.metrics_path: "/connz"
  #route.metrics_path: "/routez"

#-------------------------------- Nginx Module --------------------------------
- module: nginx
  metricsets: ["stubstatus"]
  enabled: true
  period: 10s

  # Nginx hosts
  hosts: ["http://127.0.0.1"]

  # Path to server status. Default nginx_status
  server_status_path: "nginx_status"

#----------------------------- Openmetrics Module -----------------------------
- module: openmetrics
  metricsets: ['collector']
  period: 10s
  hosts: ['localhost:9090']

  # This module uses the Prometheus collector metricset, all
  # the options for this metricset are also available here.
  metrics_path: /metrics
  metrics_filters:
    include: []
    exclude: []

#------------------------------- PHP_FPM Module -------------------------------
- module: php_fpm
  metricsets:
  - pool
  #- process
  enabled: true
  period: 10s
  status_path: "/status"
  hosts: ["localhost:8080"]

#------------------------------ PostgreSQL Module ------------------------------
- module: postgresql
  enabled: true
  metricsets:
    # Stats about every PostgreSQL database
    - database

    # Stats about the background writer process's activity
    - bgwriter

    # Stats about every PostgreSQL process
    - activity

    # Stats about every statement executed in the server. It requires the
    # `pg_stats_statement` library to be configured in the server.
    #- statement

  period: 10s

  # The host must be passed as PostgreSQL URL. Example:
  # postgres://localhost:5432?sslmode=disable
  # The available parameters are documented here:
  # https://godoc.org/github.com/lib/pq#hdr-Connection_String_Parameters
  hosts: ["postgres://localhost:5432"]

  # Username to use when connecting to PostgreSQL. Empty by default.
  #username: user

  # Password to use when connecting to PostgreSQL. Empty by default.
  #password: pass

#------------------------------ Prometheus Module ------------------------------
# Metrics collected from a Prometheus endpoint
- module: prometheus
  period: 10s
  metricsets: ["collector"]
  hosts: ["localhost:9090"]
  metrics_path: /metrics
  #metrics_filters:
  #  include: []
  #  exclude: []
  #username: "user"
  #password: "secret"

  # This can be used for service account based authorization:
  #bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  #ssl.certificate_authorities:
  #  - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt


# Metrics sent by a Prometheus server using remote_write option
#- module: prometheus
#  metricsets: ["remote_write"]
#  host: "localhost"
#  port: "9201"

  # Secure settings for the server using TLS/SSL:
  #ssl.certificate: "/etc/pki/server/cert.pem"
  #ssl.key: "/etc/pki/server/cert.key"

# Metrics that will be collected using PromQL queries
#- module: prometheus
#  metricsets: ["query"]
#  hosts: ["localhost:9090"]
#  period: 10s
#  queries:
#  - name: "instant_vector"
#    path: "/api/v1/query"
#    params:
#      query: "sum(rate(prometheus_http_requests_total[1m]))"
#  - name: "range_vector"
#    path: "/api/v1/query_range"
#    params:
#      query: "up"
#      start: "2019-12-20T00:00:00.000Z"
#      end:  "2019-12-21T00:00:00.000Z"
#      step: 1h
#  - name: "scalar"
#    path: "/api/v1/query"
#    params:
#      query: "100"
#  - name: "string"
#    path: "/api/v1/query"
#    params:
#      query: "some_value"

#------------------------------- RabbitMQ Module -------------------------------
- module: rabbitmq
  metricsets: ["node", "queue", "connection"]
  enabled: true
  period: 10s
  hosts: ["localhost:15672"]

  # Management path prefix, if `management.path_prefix` is set in RabbitMQ
  # configuration, it has to be set to the same value.
  #management_path_prefix: ""

  #username: guest
  #password: guest

#-------------------------------- Redis Module --------------------------------
- module: redis
  metricsets: ["info", "keyspace"]
  enabled: true
  period: 10s

  # Redis hosts
  hosts: ["127.0.0.1:6379"]

  # Timeout after which a metricset should return an error.
  # The timeout defaults to the period, as a metricset fetch should never take
  # longer than the period; otherwise calls can pile up.
  #timeout: 1s

  # Optional fields to be added to each event
  #fields:
  #  datacenter: west

  # Network type to be used for redis connection. Default: tcp
  #network: tcp

  # Max number of concurrent connections. Default: 10
  #maxconn: 10

  # Filters can be used to reduce the number of fields sent.
  #processors:
  #  - include_fields:
  #      fields: ["beat", "metricset", "redis.info.stats"]

  # Redis AUTH password. Empty by default.
  #password: foobared

#------------------------------- Traefik Module -------------------------------
- module: traefik
  metricsets: ["health"]
  period: 10s
  hosts: ["localhost:8080"]

#-------------------------------- UWSGI Module --------------------------------
- module: uwsgi
  metricsets: ["status"]
  enable: true
  period: 10s
  hosts: ["tcp://127.0.0.1:9191"]

#------------------------------- VSphere Module -------------------------------
- module: vsphere
  enabled: true
  metricsets: ["datastore", "host", "virtualmachine"]
  period: 10s
  hosts: ["https://localhost/sdk"]

  username: "user"
  password: "password"
  # If insecure is true, don't verify the server's certificate chain
  insecure: false
  # Get custom fields when using virtualmachine metric set. Default false.
  # get_custom_fields: false

#------------------------------- Windows Module -------------------------------
- module: windows
  metricsets: ["perfmon"]
  enabled: true
  period: 10s
  perfmon.ignore_non_existent_counters: false
  perfmon.group_measurements_by_instance: false
  perfmon.queries:
#  - object: 'Process'
#    instance: ["*"]
#    counters:
#    - name: '% Processor Time'
#      field: cpu_usage
#      format: "float"
#    - name: "Thread Count"

- module: windows
  metricsets: ["service"]
  enabled: true
  period: 60s

#------------------------------ ZooKeeper Module ------------------------------
- module: zookeeper
  enabled: true
  metricsets: ["mntr", "server"]
  period: 10s
  hosts: ["localhost:2181"]







# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The Logstash hosts
  hosts: ["10.112.42.72:5044"]

  # Number of workers per Logstash host.
  #worker: 1





# Elasticsearch template settings
setup.template.settings:

  # A dictionary of settings to place into the settings.index dictionary
  # of the Elasticsearch template. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
  #index:
    #number_of_shards: 1
    #codec: best_compression


# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  host: "10.112.42.71:5601"

# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
  # Configure the path where the logs are written. The default is the logs directory
  # under the home path (the binary location).
  path: /var/log/metricbeat

  # The name of the files where the logs are written to.
  name: metricbeat

This is my /etc/metricbeat/metricbeat.yml. I changed the output to Logstash because I have an ELK stack running on my Kubernetes cluster.

But I get this error:

 Exiting: missing field accessing 'metricbeat.modules.21.hosts.0' (source:'/etc/metricbeat/metricbeat.yml')

Any idea how to resolve this problem?
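For what it's worth: counting the entries under metricbeat.modules from zero, entry 21 looks like the first kubernetes module, so hosts.0 would be its first host, "https://${NODE_NAME}:10250". Beats expands ${VAR} references from the environment when it loads the configuration, so my guess is that NODE_NAME simply isn't set on this CentOS machine. Here is a minimal sketch of what I think I could change, assuming that really is the cause; the localhost fallback is my own assumption, not something taken from the reference file.

# Sketch only: the same kubernetes module entry, with a default added to the
# variable reference. Beats supports the ${VAR:default_value} syntax, so the
# host falls back to "localhost" when NODE_NAME is unset and config loading
# can no longer fail on the missing variable.
- module: kubernetes
  metricsets:
    - container
    - node
    - pod
    - system
    - volume
  period: 10s
  enabled: true
  hosts: ["https://${NODE_NAME:localhost}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: "none"

Alternatively I could export NODE_NAME in the environment the metricbeat service starts with, or comment out the kubernetes modules completely since this machine is not a node of the cluster; the later entries that use ${KUBERNETES_SERVICE_HOST} and ${KUBERNETES_SERVICE_PORT} would presumably run into the same problem otherwise.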
